Dec 01 08:38:38 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 01 08:38:38 crc restorecon[4588]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:38 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 08:38:39 crc restorecon[4588]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 
08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 08:38:39 crc 
restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 
08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 
08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc 
restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 08:38:39 crc restorecon[4588]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 01 08:38:40 crc kubenswrapper[4689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 08:38:40 crc kubenswrapper[4689]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 01 08:38:40 crc kubenswrapper[4689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 08:38:40 crc kubenswrapper[4689]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 01 08:38:40 crc kubenswrapper[4689]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 01 08:38:40 crc kubenswrapper[4689]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.569349 4689 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.573986 4689 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574220 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574235 4689 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574241 4689 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574248 4689 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574254 4689 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574268 4689 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574273 4689 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574281 4689 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574290 4689 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574295 4689 feature_gate.go:330] unrecognized feature gate: Example Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574299 4689 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574304 4689 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574316 4689 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574320 4689 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574325 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574329 4689 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574335 4689 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574340 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574349 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574354 4689 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574358 4689 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574378 4689 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574384 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574389 4689 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574397 4689 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574402 4689 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574406 4689 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574412 4689 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574419 4689 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574427 4689 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574432 4689 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574437 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574442 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574448 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574453 4689 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574458 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574463 4689 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574468 4689 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574473 4689 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574482 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574487 4689 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574492 4689 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574501 4689 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574506 4689 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574511 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574515 4689 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574521 4689 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574525 4689 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574529 4689 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574533 4689 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574537 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574541 4689 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574544 4689 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574552 4689 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574556 4689 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574560 4689 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574563 4689 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574588 4689 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
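[Annotation] The long run of feature_gate.go:330 "unrecognized feature gate" warnings above (and continuing below) is the kubelet rejecting gate names it does not know. Judging purely by the names (GatewayAPI, NewOLM, AutomatedEtcdBackup, MachineConfigNodes, and so on), these appear to be OpenShift cluster-level gates handed down to a component that only understands upstream Kubernetes gates; that is an inference from the names, not something the log itself states. The interleaved feature_gate.go:351 and :353 lines cover the opposite case: recognized gates (KMSv1, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, ValidatingAdmissionPolicy) that are deprecated or already GA and slated for removal. Since the FLAG: dump below shows --feature-gates="" empty, the gates presumably arrive via the config file; a hypothetical sketch of that path:

```yaml
# Hypothetical sketch: feature gates conveyed through the kubelet config file.
# Keys the kubelet does not recognize produce the feature_gate.go:330 warnings above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CloudDualStackNodeIPs: true                  # GA: logged with a removal warning
  DisableKubeletCloudCredentialProviders: true # GA: logged with a removal warning
  KMSv1: true                                  # deprecated: logged with a removal warning
  ValidatingAdmissionPolicy: true              # GA: logged with a removal warning
  GatewayAPI: true                             # cluster-level gate -> "unrecognized" here
```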
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574597 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574604 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574612 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574616 4689 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574624 4689 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574628 4689 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574632 4689 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574639 4689 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574643 4689 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574647 4689 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574650 4689 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.574654 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574846 4689 flags.go:64] FLAG: --address="0.0.0.0" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574857 4689 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574869 4689 flags.go:64] FLAG: --anonymous-auth="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574881 4689 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574887 4689 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574894 4689 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574900 4689 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574907 4689 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574913 4689 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574917 4689 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574923 4689 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574932 4689 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574936 4689 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574941 4689 flags.go:64] FLAG: --cgroup-root="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574945 4689 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 01 08:38:40 crc 
kubenswrapper[4689]: I1201 08:38:40.574950 4689 flags.go:64] FLAG: --client-ca-file="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574955 4689 flags.go:64] FLAG: --cloud-config="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574959 4689 flags.go:64] FLAG: --cloud-provider="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574964 4689 flags.go:64] FLAG: --cluster-dns="[]" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.574994 4689 flags.go:64] FLAG: --cluster-domain="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575003 4689 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575008 4689 flags.go:64] FLAG: --config-dir="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575012 4689 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575021 4689 flags.go:64] FLAG: --container-log-max-files="5" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575029 4689 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575034 4689 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575043 4689 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575048 4689 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575057 4689 flags.go:64] FLAG: --contention-profiling="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575061 4689 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575066 4689 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575071 4689 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575075 4689 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575082 4689 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575086 4689 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575090 4689 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575097 4689 flags.go:64] FLAG: --enable-load-reader="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575103 4689 flags.go:64] FLAG: --enable-server="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575108 4689 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575117 4689 flags.go:64] FLAG: --event-burst="100" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575122 4689 flags.go:64] FLAG: --event-qps="50" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575129 4689 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575134 4689 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575139 4689 flags.go:64] FLAG: --eviction-hard="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575149 4689 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 01 08:38:40 crc kubenswrapper[4689]: 
I1201 08:38:40.575153 4689 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575157 4689 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575162 4689 flags.go:64] FLAG: --eviction-soft="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575449 4689 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575498 4689 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575511 4689 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575523 4689 flags.go:64] FLAG: --experimental-mounter-path="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575541 4689 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575556 4689 flags.go:64] FLAG: --fail-swap-on="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575564 4689 flags.go:64] FLAG: --feature-gates="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575579 4689 flags.go:64] FLAG: --file-check-frequency="20s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575597 4689 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575608 4689 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575615 4689 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575626 4689 flags.go:64] FLAG: --healthz-port="10248" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575635 4689 flags.go:64] FLAG: --help="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575643 4689 flags.go:64] FLAG: --hostname-override="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575649 4689 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575657 4689 flags.go:64] FLAG: --http-check-frequency="20s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575665 4689 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575679 4689 flags.go:64] FLAG: --image-credential-provider-config="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575686 4689 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575694 4689 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575701 4689 flags.go:64] FLAG: --image-service-endpoint="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575709 4689 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575717 4689 flags.go:64] FLAG: --kube-api-burst="100" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575725 4689 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575732 4689 flags.go:64] FLAG: --kube-api-qps="50" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575739 4689 flags.go:64] FLAG: --kube-reserved="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575750 4689 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575757 4689 
flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575764 4689 flags.go:64] FLAG: --kubelet-cgroups="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575772 4689 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575780 4689 flags.go:64] FLAG: --lock-file="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575788 4689 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575810 4689 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575824 4689 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575874 4689 flags.go:64] FLAG: --log-json-split-stream="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575882 4689 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575892 4689 flags.go:64] FLAG: --log-text-split-stream="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575899 4689 flags.go:64] FLAG: --logging-format="text" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.575906 4689 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576231 4689 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576246 4689 flags.go:64] FLAG: --manifest-url="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576254 4689 flags.go:64] FLAG: --manifest-url-header="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576267 4689 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576275 4689 flags.go:64] FLAG: --max-open-files="1000000" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576285 4689 flags.go:64] FLAG: --max-pods="110" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576292 4689 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576300 4689 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576308 4689 flags.go:64] FLAG: --memory-manager-policy="None" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576315 4689 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576323 4689 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576329 4689 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576337 4689 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576379 4689 flags.go:64] FLAG: --node-status-max-images="50" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576386 4689 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576393 4689 flags.go:64] FLAG: --oom-score-adj="-999" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576399 4689 flags.go:64] FLAG: --pod-cidr="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576406 4689 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576416 4689 flags.go:64] FLAG: --pod-manifest-path="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576423 4689 flags.go:64] FLAG: --pod-max-pids="-1" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576430 4689 flags.go:64] FLAG: --pods-per-core="0" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576437 4689 flags.go:64] FLAG: --port="10250" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576445 4689 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576452 4689 flags.go:64] FLAG: --provider-id="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576458 4689 flags.go:64] FLAG: --qos-reserved="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576465 4689 flags.go:64] FLAG: --read-only-port="10255" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576480 4689 flags.go:64] FLAG: --register-node="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576493 4689 flags.go:64] FLAG: --register-schedulable="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576505 4689 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576521 4689 flags.go:64] FLAG: --registry-burst="10" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576528 4689 flags.go:64] FLAG: --registry-qps="5" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576540 4689 flags.go:64] FLAG: --reserved-cpus="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576546 4689 flags.go:64] FLAG: --reserved-memory="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576555 4689 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576561 4689 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576568 4689 flags.go:64] FLAG: --rotate-certificates="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576575 4689 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576583 4689 flags.go:64] FLAG: --runonce="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576590 4689 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576596 4689 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576603 4689 flags.go:64] FLAG: --seccomp-default="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576610 4689 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576616 4689 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576623 4689 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576630 4689 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576636 4689 flags.go:64] FLAG: --storage-driver-password="root" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576643 4689 flags.go:64] FLAG: --storage-driver-secure="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 
08:38:40.576650 4689 flags.go:64] FLAG: --storage-driver-table="stats" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576656 4689 flags.go:64] FLAG: --storage-driver-user="root" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576663 4689 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576669 4689 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576676 4689 flags.go:64] FLAG: --system-cgroups="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576681 4689 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576701 4689 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576713 4689 flags.go:64] FLAG: --tls-cert-file="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576720 4689 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576732 4689 flags.go:64] FLAG: --tls-min-version="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576738 4689 flags.go:64] FLAG: --tls-private-key-file="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576749 4689 flags.go:64] FLAG: --topology-manager-policy="none" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576755 4689 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576764 4689 flags.go:64] FLAG: --topology-manager-scope="container" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576770 4689 flags.go:64] FLAG: --v="2" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576782 4689 flags.go:64] FLAG: --version="false" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576809 4689 flags.go:64] FLAG: --vmodule="" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576819 4689 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.576826 4689 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577046 4689 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577057 4689 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577063 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577070 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577075 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577081 4689 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577087 4689 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577092 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577098 4689 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577105 4689 feature_gate.go:330] 
unrecognized feature gate: PinnedImages Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577111 4689 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577116 4689 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577121 4689 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577126 4689 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577132 4689 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577137 4689 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577145 4689 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577152 4689 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577158 4689 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577166 4689 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577174 4689 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577180 4689 feature_gate.go:330] unrecognized feature gate: Example Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577186 4689 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577191 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577200 4689 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577206 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577212 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577217 4689 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577223 4689 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577229 4689 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577237 4689 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577243 4689 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577250 4689 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577255 4689 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577262 4689 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
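[Annotation] One detail worth connecting across this stretch: the FLAG: dump above records --cgroup-driver with its compiled-in default ("cgroupfs"), yet further down the kubelet logs "Using cgroup driver setting received from the CRI runtime" with cgroupDriver="systemd". In v1.31 the kubelet can take the driver from the CRI runtime (likely the KubeletCgroupDriverFromCRI behavior; the log does not name it, so treat that as an assumption), which makes the flag default effectively dead configuration here. The config-file form of the same setting, as a sketch:

```yaml
# Hypothetical sketch: the config-file field corresponding to --cgroup-driver.
# The flag default above is "cgroupfs"; the kubelet below adopts "systemd" from CRI-O.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```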
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577269 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577275 4689 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577281 4689 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577292 4689 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577297 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577303 4689 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577308 4689 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577313 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577318 4689 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577324 4689 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577329 4689 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577334 4689 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577339 4689 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577344 4689 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577350 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577355 4689 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577404 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577414 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577420 4689 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577425 4689 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577431 4689 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577439 4689 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577444 4689 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577449 4689 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577457 4689 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577463 4689 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577470 4689 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577476 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577482 4689 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577487 4689 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577492 4689 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577498 4689 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577503 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577508 4689 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577513 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.577518 4689 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.577540 4689 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.589395 4689 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.589440 4689 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589542 4689 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589559 4689 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589569 4689 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589578 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589587 4689 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589596 4689 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
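[Annotation] Once parsing settles, feature_gate.go:386 logs the resolved gate set; it appears three times in this stretch (at 08:38:40.577540, .590018, and .590840), presumably once per gate registry being initialized - the "presumably" is an inference, not stated in the log. Transcribed from the logged map[...] for readability, with no additions:

```yaml
# Resolved feature gates as logged by feature_gate.go:386 (transcription only):
CloudDualStackNodeIPs: true
DisableKubeletCloudCredentialProviders: true
DynamicResourceAllocation: false
EventedPLEG: false
KMSv1: true
MaxUnavailableStatefulSet: false
NodeSwap: false
ProcMountType: false
RouteExternalCertificate: false
ServiceAccountTokenNodeBinding: false
TranslateStreamCloseWebsocketRequests: false
UserNamespacesPodSecurityStandards: false
UserNamespacesSupport: false
ValidatingAdmissionPolicy: true
VolumeAttributesClass: false
```

Note that only upstream Kubernetes gates survive into this map; every "unrecognized" name from the warning runs above is absent.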
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589608 4689 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589617 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589625 4689 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589632 4689 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589640 4689 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589647 4689 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589653 4689 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589660 4689 feature_gate.go:330] unrecognized feature gate: Example Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589668 4689 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589675 4689 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589682 4689 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589691 4689 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589698 4689 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589707 4689 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589713 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589720 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589726 4689 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589732 4689 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589739 4689 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589746 4689 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589751 4689 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589757 4689 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589762 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589767 4689 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589773 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589778 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589784 4689 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589789 4689 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589798 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589804 4689 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589809 4689 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589814 4689 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589820 4689 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589826 4689 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589832 4689 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589840 4689 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589847 4689 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589853 4689 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589859 4689 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589865 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589870 4689 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589876 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589881 4689 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589887 4689 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589893 4689 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589898 4689 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589904 4689 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589910 4689 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589916 4689 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589921 4689 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589927 4689 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589932 4689 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589940 4689 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589946 4689 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589952 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589957 4689 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589963 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589968 4689 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589974 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589979 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589985 4689 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589990 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS 
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.589996 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590001 4689 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590008 4689 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.590018 4689 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590277 4689 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590288 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590296 4689 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590303 4689 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590310 4689 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590315 4689 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590321 4689 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590326 4689 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590331 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590337 4689 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590343 4689 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590348 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590354 4689 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590381 4689 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590388 4689 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590395 4689 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590401 4689 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590407 4689 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590414 4689 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590420 4689 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590426 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590432 4689 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590439 4689 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590446 4689 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590453 4689 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590459 4689 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590465 4689 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590471 4689 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590477 4689 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590483 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590490 4689 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590495 4689 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590501 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590507 4689 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590514 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590521 4689 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590527 4689 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590534 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590540 4689 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590546 4689 
feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590553 4689 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590560 4689 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590566 4689 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590572 4689 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590578 4689 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590584 4689 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590590 4689 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590598 4689 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590605 4689 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590611 4689 feature_gate.go:330] unrecognized feature gate: Example Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590687 4689 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590702 4689 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590708 4689 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590714 4689 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590720 4689 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590731 4689 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590744 4689 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590750 4689 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590756 4689 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590762 4689 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590768 4689 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590774 4689 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590780 4689 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590786 4689 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590792 4689 feature_gate.go:330] 
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590798 4689 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590804 4689 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590810 4689 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590816 4689 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590821 4689 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.590830 4689 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.590840 4689 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.591412 4689 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.595176 4689 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.595338 4689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
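The bootstrap.go line above decides "no bootstrap necessary" by confirming that the client certificate the kubeconfig points at still parses and is within its validity window. A stdlib-only Go approximation of that check against the same file the kubelet loads next; assuming, for simplicity, that the first CERTIFICATE block in the PEM file is the client cert:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // certStillValid reports whether the first CERTIFICATE block in the
    // PEM file at path parses and is currently within its validity window.
    func certStillValid(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            return false, fmt.Errorf("no certificate PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        now := time.Now()
        return now.After(cert.NotBefore) && now.Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := certStillValid("/var/lib/kubelet/pki/kubelet-client-current.pem")
        fmt.Println(ok, err)
    }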
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.595993 4689 server.go:997] "Starting client certificate rotation"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.596025 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.596329 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-30 15:10:44.775181215 +0000 UTC
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.596544 4689 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 702h32m4.178642404s for next certificate rotation
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.632682 4689 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.636015 4689 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.677184 4689 log.go:25] "Validated CRI v1 runtime API"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.695402 4689 log.go:25] "Validated CRI v1 image API"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.697093 4689 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.699914 4689 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-01-08-33-22-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.699955 4689 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.713882 4689 manager.go:217] Machine: {Timestamp:2025-12-01 08:38:40.712593792 +0000 UTC m=+0.784881716 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1b1c64ae-9dbc-417f-9fac-7f3e657b08f7 BootID:87f9359d-17aa-499d-90bc-b05146c26f0f Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:67:4e:5e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:67:4e:5e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0b:51:9a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:54:8b:86 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8a:17:67 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4a:01:25 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f6:56:69:72:4f:65 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:82:32:43:eb:7a:ea Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.714157 4689 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
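The rotation deadline logged above (2025-12-30, well before the 2026-02-24 expiry) reflects client-go's certificate manager picking a jittered deadline at roughly 70-90% of the certificate's lifetime, and the "Waiting 702h32m4.178642404s" line is simply that deadline minus the current time. A Go sketch of the arithmetic, assuming the [0.7, 0.9) jitter window and, for illustration only, a one-year validity period:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline mirrors the upstream idea: rotate at a random
    // point in the [70%, 90%) slice of the cert's validity window.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        expiry, _ := time.Parse(time.RFC3339, "2026-02-24T05:52:08Z")
        notBefore := expiry.AddDate(-1, 0, 0) // assumed one-year validity, illustrative
        deadline := rotationDeadline(notBefore, expiry)
        // "Waiting ... for next certificate rotation" is just the time left.
        fmt.Printf("deadline=%s wait=%s\n", deadline, time.Until(deadline))
    }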
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.714327 4689 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.715228 4689 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.715664 4689 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.715739 4689 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.716224 4689 topology_manager.go:138] "Creating topology manager with none policy"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.716243 4689 container_manager_linux.go:303] "Creating device plugin manager"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.716539 4689 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.716629 4689 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.716988 4689 state_mem.go:36] "Initialized new in-memory state store"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.717133 4689 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.718126 4689 kubelet.go:418] "Attempting to sync node with API server"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.718166 4689 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
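The nodeConfig dump above encodes the node's resource-protection policy: SystemReserved (cpu 200m, memory 350Mi, ephemeral-storage 350Mi) plus hard eviction thresholds expressed either as absolute quantities (memory.available < 100Mi) or as fractions of capacity (nodefs.available < 10%, imagefs.available < 15%, inodesFree < 5%). A small Go sketch of how such a threshold is evaluated; the observed values fed in at the bottom are hypothetical:

    package main

    import "fmt"

    // Threshold mirrors the shape in the log: a signal compared against
    // either an absolute quantity in bytes or a fraction of capacity.
    type Threshold struct {
        Signal     string
        Quantity   int64   // bytes; 0 means "use Percentage"
        Percentage float64 // fraction of capacity, e.g. 0.1
    }

    // crossed reports whether available has fallen below the threshold.
    func (t Threshold) crossed(available, capacity int64) bool {
        limit := t.Quantity
        if limit == 0 {
            limit = int64(t.Percentage * float64(capacity))
        }
        return available < limit
    }

    func main() {
        memory := Threshold{Signal: "memory.available", Quantity: 100 << 20} // 100Mi
        nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.1}     // 10%

        // Hypothetical observations against the capacities logged above.
        fmt.Println(memory.crossed(80<<20, 25199480832)) // true: below 100Mi
        fmt.Println(nodefs.crossed(20<<30, 85292941312)) // false: ~23% free
    }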
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.718245 4689 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.718282 4689 kubelet.go:324] "Adding apiserver pod source"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.718325 4689 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.720795 4689 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.721637 4689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.722259 4689 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.722921 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.722947 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.722958 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.722969 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.722985 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.722995 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.723006 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.723019 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.723038 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.723049 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.723062 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.723070 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.723246 4689 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.723892 4689 server.go:1280] "Started kubelet"
Dec 01 08:38:40 crc systemd[1]: Started Kubernetes Kubelet.
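The run of "Loaded volume plugin" lines is the kubelet filling its in-tree volume-plugin registry, which it later consults by plugin name (kubernetes.io/csi being the extension point for everything out-of-tree). A toy registry in the same spirit; the types here are reduced to a bare name and are not the kubelet's real interfaces:

    package main

    import (
        "fmt"
        "sort"
    )

    // VolumePlugin is a stand-in for the kubelet's plugin interface.
    type VolumePlugin interface{ Name() string }

    type plugin string

    func (p plugin) Name() string { return string(p) }

    // registry maps plugin name -> implementation, like the list logged
    // at startup ("kubernetes.io/empty-dir", "kubernetes.io/csi", ...).
    type registry map[string]VolumePlugin

    func (r registry) load(p VolumePlugin) {
        r[p.Name()] = p
        fmt.Printf("Loaded volume plugin %q\n", p.Name())
    }

    func main() {
        r := registry{}
        for _, name := range []string{
            "kubernetes.io/empty-dir", "kubernetes.io/secret",
            "kubernetes.io/configmap", "kubernetes.io/projected",
            "kubernetes.io/csi",
        } {
            r.load(plugin(name))
        }
        names := make([]string, 0, len(r))
        for n := range r {
            names = append(names, n)
        }
        sort.Strings(names)
        fmt.Println(names)
    }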
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.727780 4689 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.730968 4689 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.731171 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:40 crc kubenswrapper[4689]: E1201 08:38:40.731355 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.731503 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:40 crc kubenswrapper[4689]: E1201 08:38:40.731545 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.732870 4689 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:10:28.864582125 +0000 UTC
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.733758 4689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.733779 4689 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.734601 4689 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.734773 4689 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.734797 4689 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.735101 4689 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.735318 4689 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 01 08:38:40 crc kubenswrapper[4689]: E1201 08:38:40.759694 4689 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 01 08:38:40 crc kubenswrapper[4689]: W1201 08:38:40.760991 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:40 crc kubenswrapper[4689]: E1201 08:38:40.761070 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 01 08:38:40 crc kubenswrapper[4689]: E1201 08:38:40.761238 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="200ms"
Dec 01 08:38:40 crc kubenswrapper[4689]: E1201 08:38:40.760806 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187d0aa231199098 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 08:38:40.723841176 +0000 UTC m=+0.796129070,LastTimestamp:2025-12-01 08:38:40.723841176 +0000 UTC m=+0.796129070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.762777 4689 factory.go:55] Registering systemd factory
Dec 01 08:38:40 crc kubenswrapper[4689]: I1201 08:38:40.762863 4689 factory.go:221] Registration of the systemd container factory successfully
Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.003520 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="400ms"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.003552 4689 factory.go:153] Registering CRI-O factory
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.003597 4689 factory.go:221] Registration of the crio container factory successfully
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.003682 4689 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.003691 4689 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.003740 4689 factory.go:103] Registering Raw factory
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.003763 4689 manager.go:1196] Started watching for new ooms in manager
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.004016 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.004071 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.004087 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.004099 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.004165 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.004196 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.004210 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.004224 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006094 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006109 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006122 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006134 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006145 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006161 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006172 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006182 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006212 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006222 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006234 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006244 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006253 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006263 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006274 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006284 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006295 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006305 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006319 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006331 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006341 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006351 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006363 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006401 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006414 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006424 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006436 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006447 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006459 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006476 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006487 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006496 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006507 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006518 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006529 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006539 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006552 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006564 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006578 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006591 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006604 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006616 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.006622 4689 manager.go:319] Starting recovery of all containers Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007346 4689 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007623 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007647 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007712 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007725 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007737 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007749 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007761 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007770 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007780 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007789 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007798 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007810 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007820 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007829 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007838 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007847 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007859 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007868 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007881 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007890 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007900 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007910 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007921 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007931 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007941 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007950 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007959 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007969 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007978 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007987 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.007996 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008005 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008016 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008028 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008038 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008049 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008059 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008068 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008081 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008091 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008104 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008115 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008127 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008136 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008148 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008159 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008169 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008179 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008198 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008214 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008224 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008235 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008244 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008255 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008270 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008282 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008291 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008301 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008312 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008321 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008331 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008342 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008350 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008360 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008384 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008394 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008402 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008411 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008420 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008428 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008438 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008446 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008462 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008473 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008485 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008494 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008507 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008516 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008531 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008544 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008558 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008569 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008582 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008595 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008611 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008623 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008641 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008651 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008661 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008675 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008687 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008699 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008710 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008723 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008735 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008747 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008762 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008771 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008780 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008798 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008809 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008822 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008837 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008847 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008860 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008874 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008885 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008900 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008910 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008923 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008933 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008949 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008959 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008967 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008979 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008990 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.008999 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009011 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009043 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009054 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009067 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009084 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009094 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009105 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009115 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009125 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009135 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009146 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009157 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009167 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009181 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009190 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009201 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009211 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009223 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009232 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009241 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009251 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009262 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009271 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009281 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009290 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009301 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009310 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009318 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009327 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009337 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009347 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009358 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009383 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009393 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009402 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009412 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009427 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009436 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009447 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009456 4689 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009469 4689 reconstruct.go:97] "Volume reconstruction finished" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.009478 4689 reconciler.go:26] "Reconciler: 
start to sync state" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.010210 4689 server.go:460] "Adding debug handlers to kubelet server" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.025234 4689 manager.go:324] Recovery completed Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.034680 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.039871 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.039929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.039942 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.043596 4689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.044094 4689 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.044113 4689 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.044139 4689 state_mem.go:36] "Initialized new in-memory state store" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.045956 4689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.046018 4689 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.046064 4689 kubelet.go:2335] "Starting kubelet main sync loop" Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.046125 4689 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 01 08:38:41 crc kubenswrapper[4689]: W1201 08:38:41.047593 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.047686 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.058992 4689 policy_none.go:49] "None policy: Start" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.064098 4689 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.064139 4689 state_mem.go:35] "Initializing new in-memory state store" Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.104409 4689 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.114461 4689 manager.go:334] "Starting Device Plugin manager" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.114519 4689 manager.go:513] "Failed to 
read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.114536 4689 server.go:79] "Starting device plugin registration server" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.114945 4689 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.114966 4689 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.115263 4689 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.115343 4689 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.115351 4689 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.130201 4689 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.147100 4689 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.147451 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.148619 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.148723 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.148845 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.149083 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.149317 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.149387 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.150238 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.150342 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.150454 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.150359 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.150575 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.150586 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.150751 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.150942 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.151007 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.151859 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.151885 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.151898 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.152042 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.152253 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.152319 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.153145 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.153180 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.153192 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.153312 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.153635 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.153832 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.153922 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.154047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.154077 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.154088 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.153718 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.154177 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.154190 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.154229 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.154257 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.153688 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.154400 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.154975 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.155008 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.155020 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.155135 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.155154 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.155163 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.217688 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.218886 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.218924 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.218933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.218958 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.219736 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313128 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313187 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313216 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313235 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313253 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313271 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313290 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313309 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313328 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313445 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313518 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313555 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 
08:38:41.313601 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313633 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.313666 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.407412 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="800ms"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.415697 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416178 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416298 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416414 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416498 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416569 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416617 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416665 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416802 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416818 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416898 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.416980 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417023 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417063 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417100 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417155 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417076 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417189 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417197 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417216 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417231 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417130 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417182 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417244 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417330 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417349 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417335 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417255 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417220 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.417594 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.420141 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.424156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.424228 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.424253 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.424457 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.425402 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.489171 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.512572 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
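Everything above is the kubelet's volume reconciler bringing up the hostPath volumes of the five static pods (etcd-crc, kube-apiserver-crc, kube-controller-manager-crc, openshift-kube-scheduler-crc, kube-rbac-proxy-crio-crc). Each volume is keyed by a UniqueName of the form kubernetes.io/host-path/<podUID>-<volumeName>, and every "operationExecutor.MountVolume started" record should be answered by a "MountVolume.SetUp succeeded" record for the same UniqueName. A minimal sketch for checking that pairing offline; the program name and the regexp are assumptions, not kubelet code, and it expects journal lines (e.g. from journalctl -u kubelet) on stdin:

// pairmounts.go - sketch: reports any volume whose "MountVolume started"
// record never got a matching "MountVolume.SetUp succeeded" record.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// UniqueName appears escaped in the journal, e.g.
// (UniqueName: \"kubernetes.io/host-path/<podUID>-<volume>\")
var uniqueName = regexp.MustCompile(`UniqueName: \\?"([^"\\]+)\\?"`)

func main() {
	started := map[string]bool{}
	done := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		m := uniqueName.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		switch {
		case strings.Contains(line, "operationExecutor.MountVolume started"):
			started[m[1]] = true
		case strings.Contains(line, "MountVolume.SetUp succeeded"):
			done[m[1]] = true
		}
	}
	for name := range started {
		if !done[name] {
			fmt.Println("no SetUp succeeded record for", name)
		}
	}
}

On this boot every started volume also reports SetUp succeeded within the window above, so the sketch would print nothing.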
Dec 01 08:38:41 crc kubenswrapper[4689]: W1201 08:38:41.525538 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-22c66f5ab47775bbc2b9e0846446e3dfc140bac38077d71bab7afbbb72ab2ea3 WatchSource:0}: Error finding container 22c66f5ab47775bbc2b9e0846446e3dfc140bac38077d71bab7afbbb72ab2ea3: Status 404 returned error can't find the container with id 22c66f5ab47775bbc2b9e0846446e3dfc140bac38077d71bab7afbbb72ab2ea3
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.532767 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.540150 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: W1201 08:38:41.540502 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9cb387e4cb7b386be02439f45e6f3dfc8998ad2d56eb40ff8fccb75350499f85 WatchSource:0}: Error finding container 9cb387e4cb7b386be02439f45e6f3dfc8998ad2d56eb40ff8fccb75350499f85: Status 404 returned error can't find the container with id 9cb387e4cb7b386be02439f45e6f3dfc8998ad2d56eb40ff8fccb75350499f85
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.544788 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: W1201 08:38:41.563849 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c4f4b83360bbcc1b7aea04801da14dbd5d362c2e780eb7d59c18804447adc246 WatchSource:0}: Error finding container c4f4b83360bbcc1b7aea04801da14dbd5d362c2e780eb7d59c18804447adc246: Status 404 returned error can't find the container with id c4f4b83360bbcc1b7aea04801da14dbd5d362c2e780eb7d59c18804447adc246
Dec 01 08:38:41 crc kubenswrapper[4689]: W1201 08:38:41.566463 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3dbec1f9da6cc3c54e0a95d262711529b82263ee0897f798043425ba81bd4c68 WatchSource:0}: Error finding container 3dbec1f9da6cc3c54e0a95d262711529b82263ee0897f798043425ba81bd4c68: Status 404 returned error can't find the container with id 3dbec1f9da6cc3c54e0a95d262711529b82263ee0897f798043425ba81bd4c68
Dec 01 08:38:41 crc kubenswrapper[4689]: W1201 08:38:41.591190 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.591337 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 01 08:38:41 crc kubenswrapper[4689]: W1201 08:38:41.593852 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d673684398df65b8b86ecf2c2eb9be3343e9bf9e2632b82147c757c4a2ab3535 WatchSource:0}: Error finding container d673684398df65b8b86ecf2c2eb9be3343e9bf9e2632b82147c757c4a2ab3535: Status 404 returned error can't find the container with id d673684398df65b8b86ecf2c2eb9be3343e9bf9e2632b82147c757c4a2ab3535
Dec 01 08:38:41 crc kubenswrapper[4689]: W1201 08:38:41.670585 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.670684 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.732525 4689 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.733613 4689 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:02:42.534462428 +0000 UTC
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.733729 4689 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h24m0.800736431s for next certificate rotation
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.825777 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.827354 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.827421 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.827436 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:41 crc kubenswrapper[4689]: I1201 08:38:41.827469 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.832130 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc"
Dec 01 08:38:41 crc kubenswrapper[4689]: W1201 08:38:41.854907 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:41 crc kubenswrapper[4689]: E1201 08:38:41.855002 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
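Amid all the connection-refused noise, the certificate manager records are the one self-consistent piece of state: the kubelet-serving certificate expires 2026-02-24 05:53:03 UTC, the manager chose a rotation deadline of 2025-12-02 06:02:42.534462428 UTC, and the advertised wait of 21h24m0.800736431s is exactly that deadline minus the instant the record was emitted. A quick check of the arithmetic in Go; the emission timestamp below is back-derived from the wait value (the journal only shows the time to the microsecond), so its nanosecond field is an assumption:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	deadline, _ := time.Parse(layout, "2025-12-02 06:02:42.534462428 +0000 UTC")
	logged, _ := time.Parse(layout, "2025-12-01 08:38:41.733725997 +0000 UTC")
	fmt.Println(deadline.Sub(logged)) // 21h24m0.800736431s, as logged
}

The deadline itself sits well before the February expiry; the manager picks a randomized rotation point late in the certificate's lifetime, which is why it is not a round number.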
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.057664 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.058153 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3dbec1f9da6cc3c54e0a95d262711529b82263ee0897f798043425ba81bd4c68"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.059509 4689 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af" exitCode=0
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.059619 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.059712 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c4f4b83360bbcc1b7aea04801da14dbd5d362c2e780eb7d59c18804447adc246"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.059914 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.061391 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.061515 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.061527 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.063709 4689 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="921be1c6959e2fc0a26fdb36ba09b5c79974f935782d5f1af2f908632d6ee0f6" exitCode=0
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.063843 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"921be1c6959e2fc0a26fdb36ba09b5c79974f935782d5f1af2f908632d6ee0f6"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.063894 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9cb387e4cb7b386be02439f45e6f3dfc8998ad2d56eb40ff8fccb75350499f85"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.064013 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.064957 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.064982 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.064992 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.065719 4689 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125" exitCode=0
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.065790 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.065814 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"22c66f5ab47775bbc2b9e0846446e3dfc140bac38077d71bab7afbbb72ab2ea3"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.065871 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.066688 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.066727 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.066741 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.069220 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8" exitCode=0
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.069290 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.069329 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d673684398df65b8b86ecf2c2eb9be3343e9bf9e2632b82147c757c4a2ab3535"}
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.069456 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.070403 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.070433 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.070445 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.075641 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.078179 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.078233 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.078252 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:42 crc kubenswrapper[4689]: E1201 08:38:42.209116 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="1.6s"
Dec 01 08:38:42 crc kubenswrapper[4689]: W1201 08:38:42.506505 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:42 crc kubenswrapper[4689]: E1201 08:38:42.506621 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.633182 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.634819 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.634863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.634872 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.634930 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 08:38:42 crc kubenswrapper[4689]: E1201 08:38:42.635886 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc"
Dec 01 08:38:42 crc kubenswrapper[4689]: I1201 08:38:42.766773 4689 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.075601 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb"}
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.075725 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64"}
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.085884 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65"}
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.085966 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1"}
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.089433 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e"}
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.089490 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11"}
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.089535 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360"}
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.089772 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.090849 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.090898 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.090923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.093598 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f2c1b6982c1f372b680ec1f056266b51f7156a1881dea415db18d5280c7bf92b"}
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.093865 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.095310 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.095444 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.095550 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
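The PLEG records above pair up with the cAdvisor 404s earlier: each pod's first ContainerStarted carries the same ID as the crio-... cgroup cAdvisor could not resolve yet (that ID is the pod sandbox), and each ContainerDied with exitCode=0 is a static pod's init container finishing normally before the long-lived containers start. The event payload is Go struct formatting rather than JSON, so a regexp is the quickest way to lift it into a per-pod timeline. A sketch; the file name and variable names are assumptions, and it expects kubelet journal lines on stdin:

// plegtimeline.go - sketch: prints pod, event type, and a shortened
// container/sandbox ID for every "SyncLoop (PLEG)" record.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var pleg = regexp.MustCompile(
	`SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"[^"]+","Type":"([^"]+)","Data":"([^"]+)"\}`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
			// m[1]=pod, m[2]=event type, m[3]=container or sandbox ID
			fmt.Printf("%-60s %-16s %.12s\n", m[1], m[2], m[3])
		}
	}
}

Run against this window it would show, per pod, one ContainerStarted for the sandbox, a ContainerDied (exit 0) for each init step, then ContainerStarted entries for the real containers.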
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.100240 4689 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0b324f4c55eee3ab4cd16cc8bf706c96ec23946906b7cc437cfe3e525e8d8992" exitCode=0
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.100566 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0b324f4c55eee3ab4cd16cc8bf706c96ec23946906b7cc437cfe3e525e8d8992"}
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.100984 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.102318 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.102381 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.102394 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.209853 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:43 crc kubenswrapper[4689]: W1201 08:38:43.262813 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:43 crc kubenswrapper[4689]: E1201 08:38:43.263004 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 01 08:38:43 crc kubenswrapper[4689]: W1201 08:38:43.514688 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 01 08:38:43 crc kubenswrapper[4689]: E1201 08:38:43.516288 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.869476 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:43 crc kubenswrapper[4689]: I1201 08:38:43.881407 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.113592 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7"}
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.113675 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f"}
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.113693 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2"}
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.114257 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.115636 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.115744 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.115809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.116846 4689 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="82d085936f8fa50e21ceb33dece861ce43fe37058227b1c6911171dfc076f798" exitCode=0
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.116920 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"82d085936f8fa50e21ceb33dece861ce43fe37058227b1c6911171dfc076f798"}
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.117076 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.118808 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.118901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.118972 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.130343 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42"}
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.130469 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.130713 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.131664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.131725 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.131742 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.133297 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.133447 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.133553 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.236220 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.238078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.238268 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.238301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:44 crc kubenswrapper[4689]: I1201 08:38:44.238346 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.136624 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"30720245d97d2053e407e3039443629a5edefa647b247412b535c0a436a8325d"}
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.136675 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fdbfe79cf101357b86e8fa4a6eaa769617b6c8c98e339dd887970c7f957588bf"}
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.136693 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d244c74a4c458aa718e3010263b581f72216421da46e936f5e952067b6bdfc80"}
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.136714 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.136759 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.136775 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.136780 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.136817 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.137499 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
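With the apiserver still unreachable, the kubelet's retries are visibly backing off rather than hammering the endpoint: the lease controller advertised interval="800ms" at 08:38:41, interval="1.6s" at 08:38:42, and (once the endpoint started timing out instead of refusing) interval="3.2s" at 08:38:53, while the "Attempting to register node" attempts at 08:38:41.42, 08:38:41.83, 08:38:42.63 and 08:38:44.24 show the same roughly doubling spacing. A sketch of that doubling pattern, illustrative only; the base interval, cap, and function names are assumptions, not constants from kubelet source:

package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the real API call; here it always fails the
// way this log does ("connect: connection refused").
func ensureLease() error { return errors.New("connection refused") }

func main() {
	interval := 400 * time.Millisecond // assumed base; the log starts at 800ms
	const cap = 7 * time.Second       // assumed upper bound
	for attempt := 1; attempt <= 4; attempt++ {
		if err := ensureLease(); err != nil {
			interval *= 2
			if interval > cap {
				interval = cap
			}
			fmt.Printf("Failed to ensure lease exists, will retry interval=%v (err=%v)\n", interval, err)
			time.Sleep(interval)
		}
	}
}

Run as-is this prints 800ms, 1.6s, 3.2s, 6.4s, matching the progression of intervals visible in the journal.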
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.138054 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.138085 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.138061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.138096 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.138112 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.138122 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.138132 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.138169 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.138231 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:45 crc kubenswrapper[4689]: I1201 08:38:45.905612 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.146587 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ed6b4db1d9a36246844820185fca89d3da7459b219d0c95ffd4b64125aa0b353"}
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.146693 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e2c2451d2ef54ab13321a4d14a8b1ebd1109758b866f4c682aeac8ab1849f11d"}
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.146863 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.146879 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.149316 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.149455 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.149497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.154761 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.154798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.154812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.211084 4689 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 01 08:38:46 crc kubenswrapper[4689]: I1201 08:38:46.211527 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.150266 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.150783 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.152753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.152832 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.152859 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.153092 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.153126 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.153142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.285471 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.285795 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.287844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.287922 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.287949 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.428253 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:38:47 crc kubenswrapper[4689]: I1201 08:38:47.435567 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 01 08:38:48 crc kubenswrapper[4689]: I1201 08:38:48.153942 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:48 crc kubenswrapper[4689]: I1201 08:38:48.153942 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:48 crc kubenswrapper[4689]: I1201 08:38:48.155289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:48 crc kubenswrapper[4689]: I1201 08:38:48.155342 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:48 crc kubenswrapper[4689]: I1201 08:38:48.155386 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:48 crc kubenswrapper[4689]: I1201 08:38:48.156711 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:48 crc kubenswrapper[4689]: I1201 08:38:48.156795 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:48 crc kubenswrapper[4689]: I1201 08:38:48.156858 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.601702 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.602020 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.603959 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.604061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.604080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.627001 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.627223 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.627294 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.628847 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.628884 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:49 crc kubenswrapper[4689]: I1201 08:38:49.628898 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:50 crc kubenswrapper[4689]: I1201 08:38:50.532218 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 08:38:50 crc kubenswrapper[4689]: I1201 08:38:50.532590 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
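The cluster-policy-controller startup probe above is failing by timeout, not refusal: Get https://192.168.126.11:10357/healthz runs into the prober's per-attempt deadline ("context deadline exceeded ... while awaiting headers") while the container is still coming up. The request the prober makes can be reproduced from the node for debugging; a sketch, where the endpoint is taken from the records above and the 1-second timeout mirrors a typical timeoutSeconds: 1 and is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: time.Second, // assumed per-attempt probe timeout
		Transport: &http.Transport{
			// The healthz endpoint serves a cert we cannot verify from here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.126.11:10357/healthz")
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. context deadline exceeded
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}

While the container is still initializing this reports the same deadline error as the journal; once it is up it should print 200 ok.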
Dec 01 08:38:50 crc kubenswrapper[4689]: I1201 08:38:50.535133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:50 crc kubenswrapper[4689]: I1201 08:38:50.535241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:50 crc kubenswrapper[4689]: I1201 08:38:50.535263 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:51 crc kubenswrapper[4689]: E1201 08:38:51.130513 4689 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 01 08:38:53 crc kubenswrapper[4689]: I1201 08:38:53.733048 4689 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 01 08:38:53 crc kubenswrapper[4689]: E1201 08:38:53.811090 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Dec 01 08:38:53 crc kubenswrapper[4689]: I1201 08:38:53.828851 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 01 08:38:53 crc kubenswrapper[4689]: I1201 08:38:53.828975 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 01 08:38:54 crc kubenswrapper[4689]: E1201 08:38:54.240520 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 01 08:38:54 crc kubenswrapper[4689]: W1201 08:38:54.352613 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 01 08:38:54 crc kubenswrapper[4689]: I1201 08:38:54.352787 4689 trace.go:236] Trace[581555328]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 08:38:44.351) (total time: 10001ms):
Dec 01 08:38:54 crc kubenswrapper[4689]: Trace[581555328]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:38:54.352)
Dec 01 08:38:54 crc kubenswrapper[4689]: Trace[581555328]: [10.001317055s] [10.001317055s] END
Dec 01 08:38:54 crc kubenswrapper[4689]: E1201 08:38:54.352818 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 01 08:38:54 crc kubenswrapper[4689]: E1201 08:38:54.695199 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187d0aa231199098 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 08:38:40.723841176 +0000 UTC m=+0.796129070,LastTimestamp:2025-12-01 08:38:40.723841176 +0000 UTC m=+0.796129070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 01 08:38:54 crc kubenswrapper[4689]: W1201 08:38:54.712107 4689 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 01 08:38:54 crc kubenswrapper[4689]: I1201 08:38:54.712250 4689 trace.go:236] Trace[2072018185]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 08:38:44.710) (total time: 10001ms):
Dec 01 08:38:54 crc kubenswrapper[4689]: Trace[2072018185]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:38:54.712)
Dec 01 08:38:54 crc kubenswrapper[4689]: Trace[2072018185]: [10.001741342s] [10.001741342s] END
Dec 01 08:38:54 crc kubenswrapper[4689]: E1201 08:38:54.712346 4689 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 01 08:38:55 crc kubenswrapper[4689]: I1201 08:38:55.109069 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 01 08:38:55 crc kubenswrapper[4689]: I1201 08:38:55.109145 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 01 08:38:55 crc kubenswrapper[4689]: I1201 08:38:55.114894 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 01 08:38:55 crc kubenswrapper[4689]: I1201 08:38:55.114996 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 01 08:38:55 crc kubenswrapper[4689]: I1201 08:38:55.918200 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]log ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]etcd ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/priority-and-fairness-filter ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-apiextensions-informers ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-apiextensions-controllers ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/crd-informer-synced ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-system-namespaces-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 01 08:38:55 crc kubenswrapper[4689]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/bootstrap-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/apiservice-registration-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/apiservice-discovery-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]autoregister-completion ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/apiservice-openapi-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 01 08:38:55 crc kubenswrapper[4689]: livez check failed
Dec 01 08:38:55 crc kubenswrapper[4689]: I1201 08:38:55.918335 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 08:38:56 crc kubenswrapper[4689]: I1201 08:38:56.252686 4689 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded" start-of-body=
Dec 01 08:38:56 crc kubenswrapper[4689]: I1201 08:38:56.252870 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.441764 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.443812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.443870 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.443892 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.443932 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 01 08:38:57 crc kubenswrapper[4689]: E1201 08:38:57.451870 4689 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.474749 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.475521 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.477293 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.477403 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.477424 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:38:57 crc kubenswrapper[4689]: I1201 08:38:57.501843 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 01 08:38:58 crc kubenswrapper[4689]: I1201 08:38:58.196920 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:38:58 crc kubenswrapper[4689]: I1201 08:38:58.198736 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:38:58 crc kubenswrapper[4689]: I1201 08:38:58.198800 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:38:58 crc kubenswrapper[4689]: I1201 08:38:58.198814 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:38:58 crc kubenswrapper[4689]: I1201 08:38:58.252062 4689 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 01 08:38:58 crc kubenswrapper[4689]: I1201 08:38:58.600223 4689 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.123401 4689 trace.go:236] Trace[1318272624]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 08:38:48.835) (total time: 11287ms): Dec 01 08:39:00 crc kubenswrapper[4689]: Trace[1318272624]: ---"Objects listed" error: 11287ms (08:39:00.123) Dec 01 08:39:00 crc kubenswrapper[4689]: Trace[1318272624]: [11.287813105s] [11.287813105s] END Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.123460 4689 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.123614 4689 trace.go:236] Trace[1060339425]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 08:38:48.384) (total time: 11739ms): Dec 01 08:39:00 crc kubenswrapper[4689]: Trace[1060339425]: ---"Objects listed" error: 11739ms (08:39:00.123) Dec 01 08:39:00 crc kubenswrapper[4689]: Trace[1060339425]: [11.739406122s] [11.739406122s] END Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.123648 4689 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.124190 4689 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.159824 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42960->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.159941 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42960->192.168.126.11:17697: read: connection reset by peer" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.541139 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.776769 4689 apiserver.go:52] "Watching apiserver" Dec 01 08:39:00 crc 
kubenswrapper[4689]: I1201 08:39:00.781764 4689 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.782087 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.782633 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.782671 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.782828 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.783043 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.783062 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.783099 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.783142 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.783615 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.783773 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.785337 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.785768 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.786232 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.789468 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.789469 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.789609 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.789644 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.789690 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.789939 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.816280 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.835946 4689 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.838867 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.838931 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.838977 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839018 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839044 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839076 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839103 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839132 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839154 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839189 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839212 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839231 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839254 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839274 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839293 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839350 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839415 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839449 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839474 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839496 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839515 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839547 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839567 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839586 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839590 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839614 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839639 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839667 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839708 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839732 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839754 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839776 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839808 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839826 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839851 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839871 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839894 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839913 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839936 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839958 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839978 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840000 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840250 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840279 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840309 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840330 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840377 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840435 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840472 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840521 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840563 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840593 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840615 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840636 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840657 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840685 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840712 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840732 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840775 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840797 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840819 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840849 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840968 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840995 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841019 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" 
(UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841050 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841090 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841114 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841137 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841160 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841185 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841208 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841233 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841256 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841320 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841353 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841402 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841432 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841453 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841480 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841503 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841526 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841551 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841571 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841593 4689 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841616 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841638 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841659 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841681 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841703 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841724 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841744 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839594 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841790 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841802 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839598 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839847 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.839890 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840077 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840103 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840132 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840318 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840353 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840390 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840461 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840553 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840661 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840702 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840729 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840878 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.840926 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841028 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841149 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841187 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841208 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841356 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841385 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841988 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841776 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.841763 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842074 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842128 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842155 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842189 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842216 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842239 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842265 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842290 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842316 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.842344 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843308 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843340 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843410 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843440 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843464 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843494 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843522 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843549 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843574 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843661 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843687 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843716 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843738 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843761 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843788 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843811 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843835 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843857 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843881 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.843907 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844007 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844037 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844117 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844145 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844187 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844212 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844237 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844318 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844348 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844392 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844421 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844448 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844516 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844540 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844564 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844589 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844615 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844639 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844663 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844689 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844712 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844738 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844763 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844791 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844817 4689 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844842 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844866 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844891 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844926 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844952 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.844980 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845006 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845106 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845143 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845167 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845192 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845215 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845277 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845310 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845337 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845360 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845456 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845481 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845509 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845536 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845530 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845560 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845588 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845615 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845639 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845665 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845691 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845717 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845742 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845770 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845770 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845795 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845845 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845871 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845894 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845899 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.845978 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846003 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846024 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846042 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846073 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846091 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846109 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846127 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846144 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846164 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846184 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846194 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846256 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846276 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846285 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846303 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846333 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846446 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846522 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846551 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846573 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846613 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846637 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846688 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846712 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846733 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846757 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846781 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846806 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846828 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846845 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846862 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846947 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846960 4689 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846972 4689 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846990 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847004 4689 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847020 4689 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847035 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847051 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847065 4689 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847075 4689 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847089 4689 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847100 4689 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847111 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847120 4689 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847134 4689 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847148 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847158 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847168 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847177 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847187 4689 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847197 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847213 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847222 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847232 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847241 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847251 4689 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847260 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847270 4689 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847281 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847291 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847302 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847312 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847328 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.848463 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846662 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846888 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846906 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.846991 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847116 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847182 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847192 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847389 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847660 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.847908 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.848141 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.848277 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.848473 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.887912 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.848521 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.848769 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.848789 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.848831 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.848999 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.849053 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.849247 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.849201 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.849499 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.849952 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.850244 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.850480 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.850655 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.850873 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.851051 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.856558 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.856691 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.857333 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.857517 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.857729 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.857980 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.858026 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.858291 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.858426 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.858702 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.858930 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.859302 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.859904 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.860294 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.860652 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.860724 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.861020 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.861062 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.861118 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.861266 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.861494 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.861530 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.861634 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.861831 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.862025 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.862253 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.862358 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.862700 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.862841 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.863176 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.863183 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.864733 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.864961 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.865177 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.865300 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.866073 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.866482 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.866780 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.866998 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.868828 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.869337 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.869493 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.869771 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.869828 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.869899 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.870388 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.874195 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.874579 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.874591 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.875104 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.888524 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.875461 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.875715 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.875897 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.876059 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.876116 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.876438 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.876655 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.876771 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.876957 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.877036 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.877092 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.877354 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.877793 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.878169 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.878351 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.878533 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.878707 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.880163 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.880245 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.880489 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.880547 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.881850 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.882602 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.882835 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.884508 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.885935 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.888508 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.888818 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.889042 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:01.389010621 +0000 UTC m=+21.461298515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.889079 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.889126 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.889891 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.889913 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.890479 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.892537 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.892988 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.893308 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.887775 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.893647 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.893727 4689 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.893926 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.894176 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:39:01.394153478 +0000 UTC m=+21.466441382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.894421 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.894720 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.894883 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.895142 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.895425 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.895892 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.896349 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.896730 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.896803 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.897069 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.897201 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.897259 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.897720 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.898119 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.898515 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.898643 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.898997 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.901550 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.902107 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.902535 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.903951 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.903988 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.904313 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.904642 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.906899 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.906977 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:01.406954117 +0000 UTC m=+21.479242021 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.907203 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.907317 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.907846 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.908347 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.884446 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.908791 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.909698 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.911744 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.911933 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.913640 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.913892 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.916038 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.916626 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.918968 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.920968 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.921101 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.921671 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.925504 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.925913 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.925889 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.941902 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.942916 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.942957 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.943169 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.943201 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.943215 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.943281 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:01.443260041 +0000 UTC m=+21.515547945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.945250 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.949547 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.950340 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.966921 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.966968 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.966987 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967133 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967206 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967340 4689 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node
\"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: E1201 08:39:00.967415 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:01.467342731 +0000 UTC m=+21.539630635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967483 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967499 4689 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967510 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967510 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967481 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967521 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967563 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967578 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967600 4689 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 
08:39:00.967612 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967651 4689 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967663 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967675 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967686 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967697 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967708 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967719 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967730 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967741 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967752 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967763 4689 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967773 4689 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967785 4689 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967797 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967809 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967821 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967832 4689 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967842 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967853 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967866 4689 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967878 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967890 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967903 4689 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967915 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967926 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967937 4689 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967951 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967964 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967975 4689 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967986 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.967996 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968007 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968018 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968032 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968043 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968054 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968065 4689 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968078 4689 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968089 4689 reconciler_common.go:293] "Volume detached for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968100 4689 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968111 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968123 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968135 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968146 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968158 4689 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968168 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968181 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968194 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968205 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968215 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968225 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968235 4689 
reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968247 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968258 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968270 4689 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968280 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968291 4689 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968301 4689 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968322 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968336 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968348 4689 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.968360 4689 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970296 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970313 4689 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970325 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970336 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970348 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970359 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970402 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970414 4689 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970427 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970439 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970453 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970467 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970479 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970490 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970501 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970512 4689 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970524 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970536 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970548 4689 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970561 4689 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970572 4689 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970584 4689 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970596 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970607 4689 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970618 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970628 4689 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970639 4689 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970651 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970662 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970673 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970684 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970695 4689 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970708 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970719 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970731 4689 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970742 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970754 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970766 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970778 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970790 4689 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970801 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970812 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970824 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970836 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970847 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970858 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970869 4689 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970881 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970893 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970903 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970918 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970931 4689 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970942 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970953 4689 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970964 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970974 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970985 4689 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.970997 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971008 4689 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971019 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971030 4689 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971041 4689 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971052 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971063 4689 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971075 4689 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971087 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971098 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971108 4689
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971119 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971129 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971139 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971150 4689 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971161 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971172 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971182 4689 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971194 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971205 4689 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971216 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971227 4689 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971239 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971250 4689 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971260 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971271 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971282 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971294 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.971304 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:00 crc kubenswrapper[4689]: I1201 08:39:00.972691 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.127899 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.129660 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.136167 4689 util.go:30] "No sandbox for pod can be found. 
Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.136167 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.138652 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.139323 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.140535 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.141215 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.141709 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.168768 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status:
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.172826 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.176942 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.177663 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.178878 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.202168 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.203657 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.204669 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.208311 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.209035 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.217095 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.218600 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.224274 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.225054 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.226124 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.226735 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.227968 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.228489 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.246290 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: 
I1201 08:39:01.248073 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.248845 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.250567 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.251128 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.251870 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.256934 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.257933 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.263243 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.263267 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.263277 4689 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.263290 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.263775 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.264756 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.265800 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.266631 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.267311 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.269912 4689 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.270104 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.272053 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.273157 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.273802 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.278673 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.281628 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.282402 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.283445 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.284333 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.285437 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.287413 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.289502 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.290339 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.297953 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.301144 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7" exitCode=255 Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.302070 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.302779 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.303999 4689 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.305006 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.309555 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.309585 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.310464 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.311164 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.312832 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.313678 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.321651 4689 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.322050 4689 scope.go:117] "RemoveContainer" containerID="83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.323103 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.323874 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.327667 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7"} Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.327963 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bacf41753feb609b7121144365a4c32fbcc92a4f45b39c15003483a3841b6607"} 
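
Every status_manager.go:875 "Failed to update status for pod" entry in this stretch fails the same way: the API server consults the pod.network-node-identity.openshift.io admission webhook before accepting the kubelet's status patch, and that webhook, served on https://127.0.0.1:9743 by the network-node-identity-vrzqb pod whose sandbox the log shows is itself only now being restarted, is not listening yet, so each patch bounces with "connection refused". The status manager retries on subsequent syncs, so these errors are transient and clear once the webhook pod is running again. A short Python sketch under the same assumptions as the one above (journal saved to kubelet.log; the helper is illustrative) to list which pods are blocked and by which webhook:

import re

# Pair each failing pod with the webhook named in its error message.
# The non-greedy .*? stays within one record as long as every record
# that mentions "Failed to update status" also names a webhook, as here.
BLOCKED = re.compile(
    r'"Failed to update status for pod" pod="(?P<pod>[^"]+)"'
    r'.*?failed calling webhook \\"(?P<hook>[^\\"]+)\\"',
    re.DOTALL,
)

pairs = set()
with open("kubelet.log", encoding="utf-8") as fh:
    pairs.update(BLOCKED.findall(fh.read()))

for pod, hook in sorted(pairs):
    print(f"{pod}  <-  {hook}")

Against the entries in this section, every blocked pod points at the same webhook, which is what distinguishes a startup ordering loop like this from a genuine per-pod failure.
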
Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.328032 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5d632a3069cd2792658a21aae1c6bd14be64fd3ec7ea5b630248af0056c70104"} Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.328107 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.343748 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.391826 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.417910 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.446793 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.465007 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.465098 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.465129 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.465183 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.465325 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.465409 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:02.465390752 +0000 UTC m=+22.537678656 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.465500 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:39:02.465480165 +0000 UTC m=+22.537768069 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.465567 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.465578 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.465589 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.465612 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:02.465605768 +0000 UTC m=+22.537893672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.465646 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.465667 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:02.465660899 +0000 UTC m=+22.537948803 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.480455 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117
ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.500035 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.530301 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.558183 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.566767 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.566919 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.566935 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.566946 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:01 crc kubenswrapper[4689]: E1201 08:39:01.566992 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:02.566980299 +0000 UTC m=+22.639268203 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.591426 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.632968 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:01 crc kubenswrapper[4689]: I1201 08:39:01.689269 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.053117 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.053150 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.053262 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.053341 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.063185 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.094101 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.127867 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.149667 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01
T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.165230 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.180005 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.364217 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.367678 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b"} Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.368391 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.369316 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c"} Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.369345 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37"} Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.370178 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5"} Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.370202 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8eaf1c0d37cbd7a571e6c257b2896f4dc7184761b0f487f3e6b8e1c78d57f02f"} Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.390386 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.402642 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.413389 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.431043 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.439620 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.474586 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.494537 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dl2st"] Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.494927 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4z9l8"] Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.495135 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4z9l8" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.495131 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.547394 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.547864 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.548641 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.548881 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.549303 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.549954 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.550312 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.555685 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.556771 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:39:04.556734704 +0000 UTC m=+24.629022608 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.556839 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-cnibin\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.556872 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-run-multus-certs\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.556904 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.556939 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-socket-dir-parent\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.556954 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-hostroot\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.556972 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-daemon-config\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557112 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98wj6\" (UniqueName: \"kubernetes.io/projected/6bebcb50-c292-4bca-9299-2fdc21439b18-kube-api-access-98wj6\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.557128 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557184 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-os-release\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.557192 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557214 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-etc-kubernetes\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.557229 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557242 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557274 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-conf-dir\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.557300 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557320 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-system-cni-dir\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557337 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-cni-dir\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.557345 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:04.557335229 +0000 UTC m=+24.629623133 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557398 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.557434 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:04.557425062 +0000 UTC m=+24.629712966 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557462 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-run-netns\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557488 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-var-lib-cni-bin\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557542 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjzb5\" (UniqueName: \"kubernetes.io/projected/74395c07-d5ab-45ec-a616-1d0b1b336583-kube-api-access-vjzb5\") pod \"node-resolver-4z9l8\" (UID: \"74395c07-d5ab-45ec-a616-1d0b1b336583\") " pod="openshift-dns/node-resolver-4z9l8" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.557565 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.566528 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:04.566488553 +0000 UTC m=+24.638776457 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.557586 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6bebcb50-c292-4bca-9299-2fdc21439b18-cni-binary-copy\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.566633 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-var-lib-cni-multus\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.566671 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-run-k8s-cni-cncf-io\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.566696 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-var-lib-kubelet\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.566722 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/74395c07-d5ab-45ec-a616-1d0b1b336583-hosts-file\") pod \"node-resolver-4z9l8\" (UID: \"74395c07-d5ab-45ec-a616-1d0b1b336583\") " pod="openshift-dns/node-resolver-4z9l8" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.567921 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7p2p7"] Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.568898 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hmdnx"] Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.569341 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.571011 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.575085 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.581525 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.607863 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.607969 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.608080 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.608266 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.610539 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 
08:39:02.610685 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.610901 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.644872 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.667201 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6bebcb50-c292-4bca-9299-2fdc21439b18-cni-binary-copy\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.667418 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-var-lib-cni-multus\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.667261 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-var-lib-cni-multus\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668071 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6bebcb50-c292-4bca-9299-2fdc21439b18-cni-binary-copy\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668089 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/74395c07-d5ab-45ec-a616-1d0b1b336583-hosts-file\") pod \"node-resolver-4z9l8\" (UID: \"74395c07-d5ab-45ec-a616-1d0b1b336583\") " pod="openshift-dns/node-resolver-4z9l8" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668111 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-run-k8s-cni-cncf-io\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668128 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-var-lib-kubelet\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " 
pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668145 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-cnibin\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668166 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3947625d-75bf-4332-a233-1491b2ee9d96-rootfs\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668172 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/74395c07-d5ab-45ec-a616-1d0b1b336583-hosts-file\") pod \"node-resolver-4z9l8\" (UID: \"74395c07-d5ab-45ec-a616-1d0b1b336583\") " pod="openshift-dns/node-resolver-4z9l8" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668183 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/dcab587f-eb9b-4dde-a0a1-75ed175999b0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668202 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flskq\" (UniqueName: \"kubernetes.io/projected/dcab587f-eb9b-4dde-a0a1-75ed175999b0-kube-api-access-flskq\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668211 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-var-lib-kubelet\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668222 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668235 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-run-k8s-cni-cncf-io\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668239 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98wj6\" (UniqueName: \"kubernetes.io/projected/6bebcb50-c292-4bca-9299-2fdc21439b18-kube-api-access-98wj6\") pod \"multus-dl2st\" (UID: 
\"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668258 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v5pz\" (UniqueName: \"kubernetes.io/projected/3947625d-75bf-4332-a233-1491b2ee9d96-kube-api-access-5v5pz\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668275 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-cnibin\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668291 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-os-release\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668306 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-system-cni-dir\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668323 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-cni-dir\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668339 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-os-release\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668376 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-run-netns\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668392 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3947625d-75bf-4332-a233-1491b2ee9d96-mcd-auth-proxy-config\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668409 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-run-multus-certs\") pod \"multus-dl2st\" (UID: 
\"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668427 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668444 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-socket-dir-parent\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668458 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-hostroot\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668473 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-daemon-config\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.668475 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.668495 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.668508 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:02 crc kubenswrapper[4689]: E1201 08:39:02.668557 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:04.668542332 +0000 UTC m=+24.740830236 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668702 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-os-release\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668757 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-system-cni-dir\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668307 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-cnibin\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668494 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3947625d-75bf-4332-a233-1491b2ee9d96-proxy-tls\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668835 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-cni-dir\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668854 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-etc-kubernetes\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668867 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-run-netns\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668878 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-system-cni-dir\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668902 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" 
(UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-run-multus-certs\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668912 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-conf-dir\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668929 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dcab587f-eb9b-4dde-a0a1-75ed175999b0-cni-binary-copy\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668951 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-var-lib-cni-bin\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668967 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjzb5\" (UniqueName: \"kubernetes.io/projected/74395c07-d5ab-45ec-a616-1d0b1b336583-kube-api-access-vjzb5\") pod \"node-resolver-4z9l8\" (UID: \"74395c07-d5ab-45ec-a616-1d0b1b336583\") " pod="openshift-dns/node-resolver-4z9l8" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.668972 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-socket-dir-parent\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.669020 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-conf-dir\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.669042 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-etc-kubernetes\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.669072 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-host-var-lib-cni-bin\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.669274 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6bebcb50-c292-4bca-9299-2fdc21439b18-hostroot\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " 
pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.669450 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6bebcb50-c292-4bca-9299-2fdc21439b18-multus-daemon-config\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.698660 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjzb5\" (UniqueName: \"kubernetes.io/projected/74395c07-d5ab-45ec-a616-1d0b1b336583-kube-api-access-vjzb5\") pod \"node-resolver-4z9l8\" (UID: \"74395c07-d5ab-45ec-a616-1d0b1b336583\") " pod="openshift-dns/node-resolver-4z9l8" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.700410 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98wj6\" (UniqueName: \"kubernetes.io/projected/6bebcb50-c292-4bca-9299-2fdc21439b18-kube-api-access-98wj6\") pod \"multus-dl2st\" (UID: \"6bebcb50-c292-4bca-9299-2fdc21439b18\") " pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.722908 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.764118 4689 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.770335 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-os-release\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.770382 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3947625d-75bf-4332-a233-1491b2ee9d96-mcd-auth-proxy-config\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.770401 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.770424 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3947625d-75bf-4332-a233-1491b2ee9d96-proxy-tls\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.770441 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-system-cni-dir\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.770460 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dcab587f-eb9b-4dde-a0a1-75ed175999b0-cni-binary-copy\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.770480 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3947625d-75bf-4332-a233-1491b2ee9d96-rootfs\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.770764 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3947625d-75bf-4332-a233-1491b2ee9d96-rootfs\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.770501 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v5pz\" (UniqueName: \"kubernetes.io/projected/3947625d-75bf-4332-a233-1491b2ee9d96-kube-api-access-5v5pz\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.771151 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-cnibin\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.771171 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/dcab587f-eb9b-4dde-a0a1-75ed175999b0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.771187 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-flskq\" (UniqueName: \"kubernetes.io/projected/dcab587f-eb9b-4dde-a0a1-75ed175999b0-kube-api-access-flskq\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.771496 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3947625d-75bf-4332-a233-1491b2ee9d96-mcd-auth-proxy-config\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.771520 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-cnibin\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.771576 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-os-release\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.771677 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-system-cni-dir\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.772085 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/dcab587f-eb9b-4dde-a0a1-75ed175999b0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.772158 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dcab587f-eb9b-4dde-a0a1-75ed175999b0-cni-binary-copy\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.772286 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dcab587f-eb9b-4dde-a0a1-75ed175999b0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.778298 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3947625d-75bf-4332-a233-1491b2ee9d96-proxy-tls\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.783403 
4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.799904 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.800306 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v5pz\" (UniqueName: \"kubernetes.io/projected/3947625d-75bf-4332-a233-1491b2ee9d96-kube-api-access-5v5pz\") pod \"machine-config-daemon-hmdnx\" (UID: \"3947625d-75bf-4332-a233-1491b2ee9d96\") " pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.803221 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flskq\" (UniqueName: \"kubernetes.io/projected/dcab587f-eb9b-4dde-a0a1-75ed175999b0-kube-api-access-flskq\") pod \"multus-additional-cni-plugins-7p2p7\" (UID: \"dcab587f-eb9b-4dde-a0a1-75ed175999b0\") " pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.817332 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.829776 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.839871 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4z9l8" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.846280 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.860609 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.862213 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dl2st" Dec 01 08:39:02 crc kubenswrapper[4689]: W1201 08:39:02.880759 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bebcb50_c292_4bca_9299_2fdc21439b18.slice/crio-6c2b373a995cf671ea5bf66490e3841ba99633036cbaedf02250b87dbbd97864 WatchSource:0}: Error finding container 6c2b373a995cf671ea5bf66490e3841ba99633036cbaedf02250b87dbbd97864: Status 404 returned error can't find the container with id 6c2b373a995cf671ea5bf66490e3841ba99633036cbaedf02250b87dbbd97864 Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.887898 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.891927 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.898589 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.905821 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.925252 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.958232 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8zn56"] Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.959153 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.967410 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.967836 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.967710 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.967995 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.968121 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.968184 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.969199 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:02 crc kubenswrapper[4689]: I1201 08:39:02.983621 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.000841 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.075920 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.076296 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:03 crc kubenswrapper[4689]: E1201 08:39:03.076512 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078540 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-systemd-units\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078583 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-var-lib-openvswitch\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078606 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-ovn-kubernetes\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078643 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-ovn\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078664 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-kubelet\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078694 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-etc-openvswitch\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078713 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-config\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078733 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-script-lib\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078753 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcm2f\" 
(UniqueName: \"kubernetes.io/projected/988f960f-52fa-406f-9320-a8eec7a04f76-kube-api-access-fcm2f\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078770 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-netns\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078816 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-env-overrides\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078838 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-slash\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078853 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-bin\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078878 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078898 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-openvswitch\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078915 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/988f960f-52fa-406f-9320-a8eec7a04f76-ovn-node-metrics-cert\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078930 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-systemd\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 
08:39:03.078949 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-netd\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078972 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-node-log\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.078991 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-log-socket\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.098024 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.121051 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.148146 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.172886 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179568 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-systemd\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179634 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-openvswitch\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179664 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/988f960f-52fa-406f-9320-a8eec7a04f76-ovn-node-metrics-cert\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179690 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-netd\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179725 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-node-log\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179750 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-log-socket\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179770 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-ovn-kubernetes\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc 
kubenswrapper[4689]: I1201 08:39:03.179790 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-systemd-units\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179813 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-var-lib-openvswitch\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179843 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-kubelet\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179861 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-ovn\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179896 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-config\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.179929 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-script-lib\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180145 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcm2f\" (UniqueName: \"kubernetes.io/projected/988f960f-52fa-406f-9320-a8eec7a04f76-kube-api-access-fcm2f\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180164 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-etc-openvswitch\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180183 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-netns\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180218 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-env-overrides\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180243 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-slash\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180265 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-bin\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180303 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180393 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180447 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-systemd\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180476 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-openvswitch\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180750 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-etc-openvswitch\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180805 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-ovn\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180905 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-netd\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180935 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-node-log\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180961 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-log-socket\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.180991 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-ovn-kubernetes\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.181020 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-systemd-units\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.181045 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-var-lib-openvswitch\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.181223 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-kubelet\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.181594 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-slash\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.181714 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-netns\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.181727 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-bin\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.182590 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-config\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.182619 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-script-lib\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.183400 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-env-overrides\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.194701 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/988f960f-52fa-406f-9320-a8eec7a04f76-ovn-node-metrics-cert\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.210815 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.212412 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcm2f\" (UniqueName: \"kubernetes.io/projected/988f960f-52fa-406f-9320-a8eec7a04f76-kube-api-access-fcm2f\") pod \"ovnkube-node-8zn56\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.215731 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.218801 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.229217 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.246645 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is 
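Every "Failed to update status for pod" entry in this window fails for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a TLS certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-12-01. A minimal sketch for confirming the expiry from the node, assuming Python 3 with the third-party cryptography package (version 42 or newer for the *_utc properties) and that the listener from the log above is reachable locally:

# check_webhook_cert.py -- inspect the certificate served on 127.0.0.1:9743.
# Verification is disabled on purpose: the point is to read an already-expired cert.
import socket
import ssl

from cryptography import x509  # third-party; assumed installed

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
    with ctx.wrap_socket(sock) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER bytes of the leaf cert

cert = x509.load_der_x509_certificate(der)
print("subject:  ", cert.subject.rfc4514_string())
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)  # per the log, expect 2025-08-24 17:21:41 UTC

If notAfter matches the timestamp in the error, the rejections below are purely a certificate-rotation problem, not a networking one.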
after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.268665 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\
\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.280042 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.287439 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.287439 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z"
Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.321433 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z"
Dec 01 08:39:03 crc kubenswrapper[4689]: W1201 08:39:03.331588 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod988f960f_52fa_406f_9320_a8eec7a04f76.slice/crio-273efa17ff5b2d285cdae463bed6e3a5cc8fbb768846cf6beff009d97192773b WatchSource:0}: Error finding container 273efa17ff5b2d285cdae463bed6e3a5cc8fbb768846cf6beff009d97192773b: Status 404 returned error can't find the container with id 273efa17ff5b2d285cdae463bed6e3a5cc8fbb768846cf6beff009d97192773b
Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.362909 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z"
Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.374864 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerStarted","Data":"4c3ba95fcf9c5e12e6b0e8e7c1ae9da2be7423c1803fab4969decdba59801ebe"}
Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.379247 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4z9l8" event={"ID":"74395c07-d5ab-45ec-a616-1d0b1b336583","Type":"ContainerStarted","Data":"e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b"}
event={"ID":"74395c07-d5ab-45ec-a616-1d0b1b336583","Type":"ContainerStarted","Data":"67f541d565496b929aef71fae0fc39472c0785e8eb617a89c192b60f99af845d"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.388019 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"273efa17ff5b2d285cdae463bed6e3a5cc8fbb768846cf6beff009d97192773b"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.392756 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.392833 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"5e3a5155c19ef667aa9836df7c51fa960347b9c89c6f73c154247020db8d8518"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.394640 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dl2st" event={"ID":"6bebcb50-c292-4bca-9299-2fdc21439b18","Type":"ContainerStarted","Data":"768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.394659 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dl2st" event={"ID":"6bebcb50-c292-4bca-9299-2fdc21439b18","Type":"ContainerStarted","Data":"6c2b373a995cf671ea5bf66490e3841ba99633036cbaedf02250b87dbbd97864"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.446048 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.496941 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.552751 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.612046 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.629564 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.662894 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.679494 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.695212 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.713789 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.741714 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.780720 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.799661 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.810053 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.828873 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.842087 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.852332 4689 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.856609 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.856678 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.856690 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.856860 4689 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 
08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.857426 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.894340 4689 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.894843 4689 kubelet_node_status.go:79] "Successfully registered node" 
node="crc" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.896261 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.896300 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.896313 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.896332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.896386 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:03Z","lastTransitionTime":"2025-12-01T08:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.899792 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-m
anager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: E1201 08:39:03.915377 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.918736 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.918765 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.918774 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.918787 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.918798 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:03Z","lastTransitionTime":"2025-12-01T08:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.930459 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: E1201 08:39:03.935303 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.939918 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.939991 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.940006 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.940029 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.940058 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:03Z","lastTransitionTime":"2025-12-01T08:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.946633 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc 
kubenswrapper[4689]: E1201 08:39:03.956405 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.962648 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.963023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:03 
crc kubenswrapper[4689]: I1201 08:39:03.963129 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.963244 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.963324 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:03Z","lastTransitionTime":"2025-12-01T08:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.972467 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: E1201 08:39:03.982537 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.988576 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.988787 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.988914 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.989048 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.989156 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:03Z","lastTransitionTime":"2025-12-01T08:39:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:03 crc kubenswrapper[4689]: I1201 08:39:03.995785 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.006159 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.006320 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.007573 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.008546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.008582 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.008594 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.008614 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.008625 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:04Z","lastTransitionTime":"2025-12-01T08:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.025764 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.042180 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.047854 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.048066 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.048254 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.048678 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.074980 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.173483 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.173538 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.173552 4689 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.173572 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.173583 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:04Z","lastTransitionTime":"2025-12-01T08:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.189432 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.276913 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.276990 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.277015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.277051 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.277071 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:04Z","lastTransitionTime":"2025-12-01T08:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.386989 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.387057 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.387068 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.387086 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.387098 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:04Z","lastTransitionTime":"2025-12-01T08:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.399509 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcab587f-eb9b-4dde-a0a1-75ed175999b0" containerID="a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214" exitCode=0 Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.399596 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerDied","Data":"a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.407989 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.409540 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18" exitCode=0 Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.409590 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.413832 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.464447 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.546287 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.546396 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.546423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.546457 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.546482 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:04Z","lastTransitionTime":"2025-12-01T08:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.577285 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.577586 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.577624 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.577673 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.577820 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.577950 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:08.577907329 +0000 UTC m=+28.650195233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.578058 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:39:08.578049953 +0000 UTC m=+28.650337857 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.580814 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.580856 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:08.580847647 +0000 UTC m=+28.653135551 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.580924 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.580934 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.580952 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.580980 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:08.58096933 +0000 UTC m=+28.653257234 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.611875 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.651427 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.654311 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.654393 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.654409 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.654434 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.654449 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:04Z","lastTransitionTime":"2025-12-01T08:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.679296 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.679549 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.679572 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.679583 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:04 crc kubenswrapper[4689]: E1201 08:39:04.679633 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:08.679619029 +0000 UTC m=+28.751906933 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.728963 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.760772 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.760811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 
08:39:04.760823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.760840 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.760851 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:04Z","lastTransitionTime":"2025-12-01T08:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.763130 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.783399 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.810270 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.844864 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.863453 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.863490 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.863500 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.863557 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.863569 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:04Z","lastTransitionTime":"2025-12-01T08:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.866182 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.892730 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc 
kubenswrapper[4689]: I1201 08:39:04.911831 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.926248 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.953574 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.967273 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.967329 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.967340 4689 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.967360 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.967401 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:04Z","lastTransitionTime":"2025-12-01T08:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:04 crc kubenswrapper[4689]: I1201 08:39:04.971057 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.003050 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.032638 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.047002 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:05 crc kubenswrapper[4689]: E1201 08:39:05.047229 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.074503 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.074559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.074571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.074592 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.074605 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:05Z","lastTransitionTime":"2025-12-01T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.087338 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.217558 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.219548 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.219591 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.219616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.219637 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.219647 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:05Z","lastTransitionTime":"2025-12-01T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.246968 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.276463 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.300332 
4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.317781 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.329846 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.330434 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.330456 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.330480 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.330495 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:05Z","lastTransitionTime":"2025-12-01T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.332509 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.369694 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.388170 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.407227 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.419043 4689 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerStarted","Data":"33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.422822 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.422852 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.422866 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.432903 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.432960 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.432973 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.433000 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.433014 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:05Z","lastTransitionTime":"2025-12-01T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.437172 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.459384 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.487756 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.503455 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.517821 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.539288 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.539342 4689 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.539354 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.539384 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.539395 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:05Z","lastTransitionTime":"2025-12-01T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.544679 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.559500 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.574236 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.589882 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.601698 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.629667 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995
c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.641502 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.641558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.641570 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.641587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.641598 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:05Z","lastTransitionTime":"2025-12-01T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.646874 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.663426 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:05Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.754255 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.754297 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.754305 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.754320 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.754329 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:05Z","lastTransitionTime":"2025-12-01T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.856613 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.856722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.856736 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.856755 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.856766 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:05Z","lastTransitionTime":"2025-12-01T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.959665 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.959704 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.959713 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.959728 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:05 crc kubenswrapper[4689]: I1201 08:39:05.959738 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:05Z","lastTransitionTime":"2025-12-01T08:39:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.047137 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:06 crc kubenswrapper[4689]: E1201 08:39:06.048146 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.047170 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:06 crc kubenswrapper[4689]: E1201 08:39:06.048438 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.062540 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.062873 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.063171 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.063280 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.063390 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:06Z","lastTransitionTime":"2025-12-01T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.165751 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.165791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.165801 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.165817 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.165827 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:06Z","lastTransitionTime":"2025-12-01T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.274741 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.274837 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.274852 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.274879 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.274894 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:06Z","lastTransitionTime":"2025-12-01T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.371014 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kg5bw"] Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.371574 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.373674 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.374033 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.374700 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.374691 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.378773 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.378822 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.378857 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.378882 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.378897 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:06Z","lastTransitionTime":"2025-12-01T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.391991 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.408380 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.428386 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcab587f-eb9b-4dde-a0a1-75ed175999b0" containerID="33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab" exitCode=0 Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.428469 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerDied","Data":"33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.433104 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.433158 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.433180 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.436020 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.461889 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.475769 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af90efaa-97be-48b4-bfe6-dc25956d2b5c-host\") pod \"node-ca-kg5bw\" (UID: \"af90efaa-97be-48b4-bfe6-dc25956d2b5c\") " pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.475865 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2sbc\" (UniqueName: \"kubernetes.io/projected/af90efaa-97be-48b4-bfe6-dc25956d2b5c-kube-api-access-t2sbc\") pod \"node-ca-kg5bw\" (UID: \"af90efaa-97be-48b4-bfe6-dc25956d2b5c\") " pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.475903 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af90efaa-97be-48b4-bfe6-dc25956d2b5c-serviceca\") pod \"node-ca-kg5bw\" (UID: \"af90efaa-97be-48b4-bfe6-dc25956d2b5c\") " pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.482230 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.482260 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.482268 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.482283 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.482293 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:06Z","lastTransitionTime":"2025-12-01T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.486590 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.500286 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.514695 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.530971 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.545972 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.563520 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.576454 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.576784 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2sbc\" (UniqueName: \"kubernetes.io/projected/af90efaa-97be-48b4-bfe6-dc25956d2b5c-kube-api-access-t2sbc\") pod \"node-ca-kg5bw\" (UID: \"af90efaa-97be-48b4-bfe6-dc25956d2b5c\") " pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.576850 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af90efaa-97be-48b4-bfe6-dc25956d2b5c-serviceca\") pod 
\"node-ca-kg5bw\" (UID: \"af90efaa-97be-48b4-bfe6-dc25956d2b5c\") " pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.576912 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af90efaa-97be-48b4-bfe6-dc25956d2b5c-host\") pod \"node-ca-kg5bw\" (UID: \"af90efaa-97be-48b4-bfe6-dc25956d2b5c\") " pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.577014 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af90efaa-97be-48b4-bfe6-dc25956d2b5c-host\") pod \"node-ca-kg5bw\" (UID: \"af90efaa-97be-48b4-bfe6-dc25956d2b5c\") " pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.582886 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/af90efaa-97be-48b4-bfe6-dc25956d2b5c-serviceca\") pod \"node-ca-kg5bw\" (UID: \"af90efaa-97be-48b4-bfe6-dc25956d2b5c\") " pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.584797 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.584822 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.584830 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.584846 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.584857 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:06Z","lastTransitionTime":"2025-12-01T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.594248 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.603709 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2sbc\" (UniqueName: \"kubernetes.io/projected/af90efaa-97be-48b4-bfe6-dc25956d2b5c-kube-api-access-t2sbc\") pod \"node-ca-kg5bw\" (UID: \"af90efaa-97be-48b4-bfe6-dc25956d2b5c\") " pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.609187 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.623606 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.638904 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.648732 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.660462 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.671160 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.682111 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.686871 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kg5bw" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.689722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.689965 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.690074 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.690519 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.690626 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:06Z","lastTransitionTime":"2025-12-01T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.695318 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: W1201 08:39:06.711028 4689 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf90efaa_97be_48b4_bfe6_dc25956d2b5c.slice/crio-b4337f5d0fbd8019a382e7c3f202652118b1c758a65e03ed66908be23c034ddc WatchSource:0}: Error finding container b4337f5d0fbd8019a382e7c3f202652118b1c758a65e03ed66908be23c034ddc: Status 404 returned error can't find the container with id b4337f5d0fbd8019a382e7c3f202652118b1c758a65e03ed66908be23c034ddc Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.715526 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.726736 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.737102 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.763025 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.774892 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.788511 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.793717 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.793761 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.793777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.793798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.793825 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:06Z","lastTransitionTime":"2025-12-01T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.805480 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.820298 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.896331 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.896447 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.896465 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.896483 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:06 crc kubenswrapper[4689]: I1201 08:39:06.896497 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:06Z","lastTransitionTime":"2025-12-01T08:39:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.004168 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.004208 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.004225 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.004242 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.004253 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.046863 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:07 crc kubenswrapper[4689]: E1201 08:39:07.046998 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.106810 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.106839 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.106850 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.106864 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.106873 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.218122 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.218156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.218165 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.218181 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.218192 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.320824 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.320870 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.320881 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.320900 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.320913 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.423152 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.423198 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.423210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.423226 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.423238 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.438988 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcab587f-eb9b-4dde-a0a1-75ed175999b0" containerID="30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9" exitCode=0 Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.439095 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerDied","Data":"30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.441467 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kg5bw" event={"ID":"af90efaa-97be-48b4-bfe6-dc25956d2b5c","Type":"ContainerStarted","Data":"b4337f5d0fbd8019a382e7c3f202652118b1c758a65e03ed66908be23c034ddc"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.456221 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.477762 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 
crc kubenswrapper[4689]: I1201 08:39:07.491018 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.525591 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.525623 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.525634 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.525649 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.525659 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.535332 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.553208 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.570181 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 
2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.595139 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.607693 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.623468 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.627585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.627628 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.627639 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.627657 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.627668 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.643079 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.654703 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.665753 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.684435 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.697301 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:07Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.732084 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.732152 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.732164 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.732209 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.732224 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.834933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.834977 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.834988 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.835004 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.835015 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.937697 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.937726 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.937743 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.937757 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:07 crc kubenswrapper[4689]: I1201 08:39:07.937766 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:07Z","lastTransitionTime":"2025-12-01T08:39:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.044090 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.044165 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.044177 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.044202 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.044217 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.046468 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.046745 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.047485 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.047642 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.147652 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.147717 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.147734 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.147760 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.147779 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.251110 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.251163 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.251175 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.251195 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.251209 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.353728 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.353771 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.353782 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.353795 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.353804 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.446197 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kg5bw" event={"ID":"af90efaa-97be-48b4-bfe6-dc25956d2b5c","Type":"ContainerStarted","Data":"15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.448779 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcab587f-eb9b-4dde-a0a1-75ed175999b0" containerID="548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe" exitCode=0 Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.448827 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerDied","Data":"548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.460593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.460631 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.460641 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.460657 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.460668 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.463315 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.474806 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.484923 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.503014 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995
c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.518496 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.533182 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.548759 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.561021 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.562588 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.562612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.562619 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.562633 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.562642 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.575712 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.594587 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.598402 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.598532 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:39:16.598510993 +0000 UTC m=+36.670798887 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.598599 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.598626 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.598647 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.598715 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.598745 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.598748 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:16.598740909 +0000 UTC m=+36.671028813 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.598786 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:16.59877726 +0000 UTC m=+36.671065164 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.598911 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.598961 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.598979 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.599065 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:16.599042377 +0000 UTC m=+36.671330331 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.609214 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.620423 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.631276 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.650966 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.665116 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.665158 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.665172 4689 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.665226 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.665242 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.667563 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.680863 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.694921 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.699901 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.700113 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.700140 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.700155 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:08 crc kubenswrapper[4689]: E1201 08:39:08.700214 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:16.700193312 +0000 UTC m=+36.772481216 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.706898 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.723004 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.738509 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.751994 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 
2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.768929 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.773493 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.773545 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.773558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.773575 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.773585 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.782775 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.801802 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.812897 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.824604 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.838966 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.851392 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:08Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.877136 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.877176 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.877189 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.877210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.877227 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.980584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.980637 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.980648 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.980670 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:08 crc kubenswrapper[4689]: I1201 08:39:08.980681 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:08Z","lastTransitionTime":"2025-12-01T08:39:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.047407 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:09 crc kubenswrapper[4689]: E1201 08:39:09.047619 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.084121 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.084173 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.084184 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.084208 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.084222 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:09Z","lastTransitionTime":"2025-12-01T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.186975 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.187034 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.187049 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.187067 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.187079 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:09Z","lastTransitionTime":"2025-12-01T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.187079 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:09Z","lastTransitionTime":"2025-12-01T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.290925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.291322 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.291612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.291851 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.292064 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:09Z","lastTransitionTime":"2025-12-01T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.402341 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.403209 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.403304 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.403423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
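The NodeNotReady churn in these entries repeats one condition: the kubelet keeps the node Ready=False while /etc/kubernetes/cni/net.d/ contains no CNI configuration, and clears it only once the network plugin (ovn-kubernetes here) writes one. A sketch of the directory probe, assuming the conventional CNI loader file patterns *.conf, *.conflist and *.json (an assumption; the log only names the directory):

# cni_conf_check.py - report whether a CNI network config is present yet.
import glob
import os

# Directory named in the kubelet's NetworkPluginNotReady message.
CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

def cni_configs(conf_dir):
    # Conventional CNI config extensions (assumption, see note above).
    found = []
    for pattern in ("*.conf", "*.conflist", "*.json"):
        found.extend(glob.glob(os.path.join(conf_dir, pattern)))
    return sorted(found)

configs = cni_configs(CNI_CONF_DIR)
if configs:
    print("CNI configuration present:", ", ".join(configs))
else:
    print("no CNI configuration file in", CNI_CONF_DIR,
          "- node stays NotReady until the network plugin writes one")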
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.403517 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:09Z","lastTransitionTime":"2025-12-01T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.458736 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"}
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.507673 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.507763 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.507783 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.507812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.507832 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:09Z","lastTransitionTime":"2025-12-01T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.613150 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.613207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.613219 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.613240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.613253 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:09Z","lastTransitionTime":"2025-12-01T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.717578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.717999 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.718066 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.718155 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.718584 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:09Z","lastTransitionTime":"2025-12-01T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.906810 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.907289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.907308 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.907334 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:09 crc kubenswrapper[4689]: I1201 08:39:09.907352 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:09Z","lastTransitionTime":"2025-12-01T08:39:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.011598 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.011651 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.011662 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.011681 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.011690 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:10Z","lastTransitionTime":"2025-12-01T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.047126 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.047229 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 08:39:10 crc kubenswrapper[4689]: E1201 08:39:10.047332 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 08:39:10 crc kubenswrapper[4689]: E1201 08:39:10.047507 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.115035 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.115091 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.115101 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.115121 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.115134 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:10Z","lastTransitionTime":"2025-12-01T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.218671 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.218747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.218761 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.218792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.218805 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:10Z","lastTransitionTime":"2025-12-01T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.321709 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.321753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.321762 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.321780 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.321791 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:10Z","lastTransitionTime":"2025-12-01T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.424336 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.424405 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.424420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.424441 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
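The "Failed to update status for pod" entries that follow embed each rejected strategic-merge patch as a doubly escaped JSON string, which is nearly unreadable inline. A sketch that recovers and pretty-prints such a payload from one captured journal line (patch_line.txt is a hypothetical capture file; the two unicode-escape passes match the quoting style seen in this log and may need adjusting for other log pipelines):

# decode_patch.py - recover the readable status patch from a captured
# "Failed to update status for pod" journal line.
import codecs
import json
import re

with open("patch_line.txt", encoding="utf-8") as f:
    line = f.read()

m = re.search(r'failed to patch status \\"(.*)\\" for pod', line, re.S)
if not m:
    raise SystemExit("no patch payload found in line")

payload = m.group(1)
# The payload is escaped twice in the journal text (\\\" around every JSON
# quote); two unicode_escape passes reduce it to plain JSON. unicode_escape
# can mangle non-ASCII input, which these ASCII-only payloads avoid.
for _ in range(2):
    payload = codecs.decode(payload, "unicode_escape")

print(json.dumps(json.loads(payload), indent=2, sort_keys=True))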
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.424457 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:10Z","lastTransitionTime":"2025-12-01T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.466035 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerStarted","Data":"ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2"}
Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.482201 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.504537 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.525335 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.526909 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.526938 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.526948 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.526966 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.526976 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:10Z","lastTransitionTime":"2025-12-01T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.542746 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:
39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.563224 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.584205 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.601799 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.617571 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.780540 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.780619 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.780687 4689 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.780707 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.780724 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:10Z","lastTransitionTime":"2025-12-01T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.797266 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.816423 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.831116 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.862028 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z 
is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.879846 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.883709 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.883749 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.883758 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.883777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.883788 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:10Z","lastTransitionTime":"2025-12-01T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.892042 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:10Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.993745 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.993827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.993845 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.993872 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:10 crc kubenswrapper[4689]: I1201 08:39:10.993886 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:10Z","lastTransitionTime":"2025-12-01T08:39:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.047733 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:11 crc kubenswrapper[4689]: E1201 08:39:11.047878 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.076825 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.095757 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.097784 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.097829 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.097843 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.097871 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.097898 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:11Z","lastTransitionTime":"2025-12-01T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.118236 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.130046 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.195115 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.213643 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.222968 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.223008 4689 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.223021 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.223345 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.223394 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:11Z","lastTransitionTime":"2025-12-01T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.230300 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.250595 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.268846 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.293416 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.306579 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.329554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.329906 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.329922 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.329945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.329957 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:11Z","lastTransitionTime":"2025-12-01T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.400250 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
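
Each "Node became not ready" condition above has the same root: the kubelet asks the container runtime for network status, and CRI-O reports NetworkReady=false until a CNI config file shows up in /etc/kubernetes/cni/net.d/. On this node it is the OVN-Kubernetes pod that eventually writes that file, which is consistent with the condition persisting while ovnkube-node-8zn56 is still initializing. A minimal sketch of the directory check, assuming the usual libcni convention that any .conf/.conflist/.json file counts as configuration:

    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # path quoted in the log

    def network_ready(conf_dir=CNI_CONF_DIR):
        # libcni treats the network as configured once at least one
        # .conf/.conflist/.json file exists in the conf directory.
        try:
            return any(name.endswith((".conf", ".conflist", ".json"))
                       for name in os.listdir(conf_dir))
        except FileNotFoundError:
            return False

    print("NetworkReady:", network_ready())
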
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z 
is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.439772 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.443946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.444025 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.444040 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.444059 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.444071 4689 setters.go:603] "Node became not ready" node="crc" 
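
Every one of these patch failures is the same underlying TLS problem: the node-identity webhook serving on 127.0.0.1:9743 presents a certificate whose notAfter (2025-08-24T17:21:41Z) is months behind the node clock (2025-12-01), so the apiserver's webhook call fails the handshake and the kubelet's status update is rejected. One way to confirm this from the node is to pull the certificate and read its validity window; the sketch below assumes the third-party cryptography package is available and deliberately skips verification, since the certificate is already known to be bad:

    import datetime
    import ssl

    from cryptography import x509  # third-party package; assumed installed

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log

    # Fetch the PEM without validating it; ssl.get_server_certificate()
    # skips verification when no CA bundle is supplied.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    now = datetime.datetime.utcnow()
    print("notAfter:", cert.not_valid_after)  # naive UTC datetime
    print("expired:", cert.not_valid_after < now)
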
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:11Z","lastTransitionTime":"2025-12-01T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.458486 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.513445 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac"} Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.536546 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.547029 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.547093 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 
08:39:11.547106 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.547128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.547139 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:11Z","lastTransitionTime":"2025-12-01T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.552776 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.579110 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.592919 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.607670 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.619130 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
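
The lastState block for network-check-target-container is worth decoding: ContainerStatusUnknown with a null startedAt is what the kubelet synthesizes when it can no longer locate a container it believed was running, and exit code 137 follows the usual 128-plus-signal convention, meaning the container was SIGKILLed rather than exiting on its own. A one-liner makes the convention concrete:

    import signal

    exit_code = 137  # from lastState.terminated in the entries above
    if exit_code > 128:
        # Exit codes above 128 encode "killed by signal (code - 128)".
        print("killed by", signal.Signals(exit_code - 128).name)  # SIGKILL
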
2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.643641 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.653277 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.653338 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:11 crc 
kubenswrapper[4689]: I1201 08:39:11.653361 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.653414 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.653449 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:11Z","lastTransitionTime":"2025-12-01T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.662522 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.673649 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.685122 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.695813 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.708599 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.724112 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.739835 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.756618 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.756702 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.756716 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.756739 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.756749 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:11Z","lastTransitionTime":"2025-12-01T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.860493 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.860888 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.861005 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.861133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.861219 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:11Z","lastTransitionTime":"2025-12-01T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.964217 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.964597 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.964733 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.964884 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:11 crc kubenswrapper[4689]: I1201 08:39:11.964997 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:11Z","lastTransitionTime":"2025-12-01T08:39:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.047342 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.047342 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 08:39:12 crc kubenswrapper[4689]: E1201 08:39:12.047653 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 08:39:12 crc kubenswrapper[4689]: E1201 08:39:12.047749 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.067771 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.068030 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.068206 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.068289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.068395 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:12Z","lastTransitionTime":"2025-12-01T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.171536 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.171601 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.171610 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.171645 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.171660 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:12Z","lastTransitionTime":"2025-12-01T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.275305 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.275395 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.275413 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.275452 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.275466 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:12Z","lastTransitionTime":"2025-12-01T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.378889 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.378949 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.378967 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.378990 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.379007 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:12Z","lastTransitionTime":"2025-12-01T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.480980 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.481039 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.481057 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.481081 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.481106 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:12Z","lastTransitionTime":"2025-12-01T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.516882 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.516952 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.516969 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.584270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.584315 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.584324 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.584344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.584355 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:12Z","lastTransitionTime":"2025-12-01T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.665767 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.670024 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56"
Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.688715 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.691523 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.691577 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.691587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.691607 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.691660 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:12Z","lastTransitionTime":"2025-12-01T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.701950 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.719738 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.733150 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.747661 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.764780 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.789565 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.794461 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.794499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.794516 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.794536 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.794550 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:12Z","lastTransitionTime":"2025-12-01T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.816808 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.830698 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.842341 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.855960 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.878067 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.894217 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.896848 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.896891 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.896903 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.896918 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.896938 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:12Z","lastTransitionTime":"2025-12-01T08:39:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.908669 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.927006 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.939932 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.952345 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.965920 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.979054 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:12 crc kubenswrapper[4689]: I1201 08:39:12.996210 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:12Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.000078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.000130 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.000142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.000160 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.000172 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.008798 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.026020 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.036203 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.049396 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:13 crc kubenswrapper[4689]: E1201 08:39:13.049538 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.054075 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.067425 4689 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301
054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.080073 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.090443 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.107077 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.107348 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.107493 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.107592 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.107684 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.120597 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f
1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.210747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.210798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.210824 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.210848 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.210861 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.313564 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.313609 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.313622 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.313642 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.313656 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.416753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.416792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.416800 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.416817 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.416827 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.521678 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.521721 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.521732 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.521750 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.521763 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.525834 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcab587f-eb9b-4dde-a0a1-75ed175999b0" containerID="ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2" exitCode=0 Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.527147 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerDied","Data":"ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.557548 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.572538 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.590793 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.607202 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.622622 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.624649 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.624688 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.624730 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.624755 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.624767 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.639110 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fls
kq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.661497 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.675561 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.687261 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.700701 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.716280 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.730931 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.732769 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.732819 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.732831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.732850 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.732861 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.744595 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.764925 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.832318 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.836060 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.836103 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.836115 4689 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.836131 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.836144 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.850495 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.862168 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.874683 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.886036 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.899816 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.914828 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.940404 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ov
nkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.945446 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.945503 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 
08:39:13.945519 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.945538 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.945550 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:13Z","lastTransitionTime":"2025-12-01T08:39:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.957067 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.971669 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.984742 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:13 crc kubenswrapper[4689]: I1201 08:39:13.999715 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:13Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.016716 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.037714 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.046673 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.046790 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:14 crc kubenswrapper[4689]: E1201 08:39:14.046899 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:14 crc kubenswrapper[4689]: E1201 08:39:14.047202 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.049242 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.049263 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.049273 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.049296 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.049310 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.053183 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"p
odIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.151951 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.152267 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.152332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.152416 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.152533 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.188640 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.188699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.188711 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.188767 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.188783 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: E1201 08:39:14.212924 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.217663 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.217733 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.217745 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.217768 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.217777 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: E1201 08:39:14.239722 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.246334 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.246429 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.246446 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.246467 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.246480 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: E1201 08:39:14.257532 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.260827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.260875 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.260892 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.260911 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.260924 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: E1201 08:39:14.272545 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.275724 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.275766 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.275777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.275796 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.275811 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: E1201 08:39:14.287060 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: E1201 08:39:14.287247 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.289227 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.289268 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.289282 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.289300 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.289319 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.394186 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.394254 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.394273 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.394308 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.394325 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.496833 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.496874 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.496884 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.496899 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.496910 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.532830 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcab587f-eb9b-4dde-a0a1-75ed175999b0" containerID="512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d" exitCode=0 Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.532897 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerDied","Data":"512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.554930 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.567247 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.582540 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.594559 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.601256 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.601317 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.601331 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.601353 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.601380 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.607406 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.748150 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.750395 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.750451 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.750467 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.750490 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.750502 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.855864 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.855916 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.855928 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.855946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.855958 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.770087 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f
1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.919837 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.933428 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.950890 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.958609 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.958646 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.958658 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.958680 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.958690 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:14Z","lastTransitionTime":"2025-12-01T08:39:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.965587 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:14 crc kubenswrapper[4689]: I1201 08:39:14.980502 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:14Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.002866 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b6
4000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:15Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.015612 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:15Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.046500 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:15 crc kubenswrapper[4689]: E1201 08:39:15.046700 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.060967 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.061007 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.061017 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.061031 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.061041 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:15Z","lastTransitionTime":"2025-12-01T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.163664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.163715 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.163727 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.163747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.163761 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:15Z","lastTransitionTime":"2025-12-01T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.267172 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.267217 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.267228 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.267249 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.267259 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:15Z","lastTransitionTime":"2025-12-01T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.370099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.370156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.370169 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.370191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.370203 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:15Z","lastTransitionTime":"2025-12-01T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.472863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.472901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.472910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.472929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.472940 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:15Z","lastTransitionTime":"2025-12-01T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.575356 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.575462 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.575477 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.575503 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.575520 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:15Z","lastTransitionTime":"2025-12-01T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.678467 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.678522 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.678537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.678558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.678571 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:15Z","lastTransitionTime":"2025-12-01T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.781812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.781863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.781876 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.781900 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.781914 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:15Z","lastTransitionTime":"2025-12-01T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.884644 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.884689 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.884698 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.884716 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:15 crc kubenswrapper[4689]: I1201 08:39:15.884726 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:15Z","lastTransitionTime":"2025-12-01T08:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.009265 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.009325 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.009335 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.009350 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.009385 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:16Z","lastTransitionTime":"2025-12-01T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.047102 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.047164 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.047291 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.047439 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.112028 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.112098 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.112128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.112153 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.112166 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:16Z","lastTransitionTime":"2025-12-01T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.285564 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.288546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.288619 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.288663 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.288700 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:16Z","lastTransitionTime":"2025-12-01T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.392158 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.392226 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.392241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.392264 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.392279 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:16Z","lastTransitionTime":"2025-12-01T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.495229 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.495290 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.495301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.495321 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.495332 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:16Z","lastTransitionTime":"2025-12-01T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.548747 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" event={"ID":"dcab587f-eb9b-4dde-a0a1-75ed175999b0","Type":"ContainerStarted","Data":"761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.570062 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.688590 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.688772 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.688806 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.688831 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.689639 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:39:32.689616455 +0000 UTC m=+52.761904359 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.690150 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.690209 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:32.690196701 +0000 UTC m=+52.762484605 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.690288 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.690302 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.690316 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.690348 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:32.690339134 +0000 UTC m=+52.762627038 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.690413 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.690447 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:32.690437187 +0000 UTC m=+52.762725091 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.699201 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTi
me\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.699425 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.699456 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.699467 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.699485 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.699496 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:16Z","lastTransitionTime":"2025-12-01T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.718181 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.735156 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.754795 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.761837 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj"] Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.762401 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.764770 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.765260 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.768443 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.782258 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.790045 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.790411 
4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.790468 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.790487 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:16 crc kubenswrapper[4689]: E1201 08:39:16.790590 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:32.790559685 +0000 UTC m=+52.862847699 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.797255 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.801947 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.801990 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.802002 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.802021 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.802034 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:16Z","lastTransitionTime":"2025-12-01T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.809660 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.823250 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.838730 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.854445 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.871388 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.891486 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pwwx\" (UniqueName: \"kubernetes.io/projected/65456ad6-e7d1-4546-a977-244691fc5722-kube-api-access-2pwwx\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.891547 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65456ad6-e7d1-4546-a977-244691fc5722-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.891736 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65456ad6-e7d1-4546-a977-244691fc5722-env-overrides\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.891683 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f
1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.891857 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65456ad6-e7d1-4546-a977-244691fc5722-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.905048 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.905109 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.905120 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.905141 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.905154 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:16Z","lastTransitionTime":"2025-12-01T08:39:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.907462 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.919585 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.933413 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.946586 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.959927 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.974754 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.986799 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.993495 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65456ad6-e7d1-4546-a977-244691fc5722-env-overrides\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.993556 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65456ad6-e7d1-4546-a977-244691fc5722-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.993639 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pwwx\" 
(UniqueName: \"kubernetes.io/projected/65456ad6-e7d1-4546-a977-244691fc5722-kube-api-access-2pwwx\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.993680 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65456ad6-e7d1-4546-a977-244691fc5722-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.994524 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65456ad6-e7d1-4546-a977-244691fc5722-env-overrides\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:16 crc kubenswrapper[4689]: I1201 08:39:16.994672 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65456ad6-e7d1-4546-a977-244691fc5722-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.000634 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:16Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.003093 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65456ad6-e7d1-4546-a977-244691fc5722-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.008897 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.008933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.008945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.008967 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.008980 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.011873 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pwwx\" (UniqueName: \"kubernetes.io/projected/65456ad6-e7d1-4546-a977-244691fc5722-kube-api-access-2pwwx\") pod \"ovnkube-control-plane-749d76644c-c2qqj\" (UID: \"65456ad6-e7d1-4546-a977-244691fc5722\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.017997 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.029497 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.041847 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.046826 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:17 crc kubenswrapper[4689]: E1201 08:39:17.046975 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.056062 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287fa
af92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.078807 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.079109 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: W1201 08:39:17.097446 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65456ad6_e7d1_4546_a977_244691fc5722.slice/crio-fa4bad1757a7e150c40a5b22860b2a693ff5227a192f82d7f65c24a6e1c12ed1 WatchSource:0}: Error finding container fa4bad1757a7e150c40a5b22860b2a693ff5227a192f82d7f65c24a6e1c12ed1: Status 404 returned error can't find the container with id fa4bad1757a7e150c40a5b22860b2a693ff5227a192f82d7f65c24a6e1c12ed1 Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.100536 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.111107 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.111179 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.111191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.111211 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.111223 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.150120 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.214005 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.214049 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 
08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.214062 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.214083 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.214093 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.317831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.317888 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.317900 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.317926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.317940 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.420805 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.420858 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.420869 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.420890 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.420902 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.523664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.523709 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.523721 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.523743 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.523755 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.554703 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/0.log" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.558198 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac" exitCode=1 Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.558252 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.560129 4689 scope.go:117] "RemoveContainer" containerID="71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.561309 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" event={"ID":"65456ad6-e7d1-4546-a977-244691fc5722","Type":"ContainerStarted","Data":"0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.561345 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" event={"ID":"65456ad6-e7d1-4546-a977-244691fc5722","Type":"ContainerStarted","Data":"c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.561359 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" event={"ID":"65456ad6-e7d1-4546-a977-244691fc5722","Type":"ContainerStarted","Data":"fa4bad1757a7e150c40a5b22860b2a693ff5227a192f82d7f65c24a6e1c12ed1"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.579695 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.608759 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.626207 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.626255 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.626268 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.626289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.626303 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.629610 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.648465 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.664327 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.685132 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.700323 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"
podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.722007 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.729841 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.729907 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.729923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.729945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.729959 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.739530 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.753664 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.769123 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.782151 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.795147 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.805394 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.827014 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"message\\\":\\\"al\\\\nI1201 08:39:16.832519 5842 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 08:39:16.832524 5842 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 08:39:16.832537 5842 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 08:39:16.832573 5842 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 08:39:16.832594 5842 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 08:39:16.832601 5842 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 08:39:16.832607 5842 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 08:39:16.832608 5842 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 08:39:16.832630 5842 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 08:39:16.832642 5842 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 08:39:16.832646 5842 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 08:39:16.832651 5842 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 08:39:16.832661 5842 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 08:39:16.832666 5842 factory.go:656] Stopping watch factory\\\\nI1201 08:39:16.832674 5842 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 08:39:16.832693 5842 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.833673 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.833732 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.833742 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.833761 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.833771 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.842329 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.857083 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.871706 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.888749 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.901649 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.919138 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jtwvs"] Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.919702 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:17 crc kubenswrapper[4689]: E1201 08:39:17.919774 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.924442 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.936696 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.936760 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.936775 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.936803 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.936853 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:17Z","lastTransitionTime":"2025-12-01T08:39:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.946710 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.973717 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:17 crc kubenswrapper[4689]: I1201 08:39:17.991412 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:17Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.007323 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.007788 4689 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkwdj\" (UniqueName: \"kubernetes.io/projected/5d6a08d0-a948-4c69-b3f0-f5e084adb453-kube-api-access-fkwdj\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.019599 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.035308 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.039571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.039737 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.039839 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.039937 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.040020 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.047137 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:18 crc kubenswrapper[4689]: E1201 08:39:18.047341 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.047687 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:18 crc kubenswrapper[4689]: E1201 08:39:18.047897 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.050597 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.072385 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"message\\\":\\\"al\\\\nI1201 08:39:16.832519 5842 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 08:39:16.832524 5842 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 08:39:16.832537 5842 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 08:39:16.832573 5842 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 08:39:16.832594 5842 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 08:39:16.832601 5842 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 08:39:16.832607 5842 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 08:39:16.832608 5842 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 08:39:16.832630 5842 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 08:39:16.832642 5842 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 08:39:16.832646 5842 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 08:39:16.832651 5842 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 08:39:16.832661 5842 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 08:39:16.832666 5842 factory.go:656] Stopping watch factory\\\\nI1201 08:39:16.832674 5842 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 08:39:16.832693 5842 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.102002 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.125249 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.125810 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwdj\" (UniqueName: \"kubernetes.io/projected/5d6a08d0-a948-4c69-b3f0-f5e084adb453-kube-api-access-fkwdj\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.125999 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:18 crc kubenswrapper[4689]: E1201 08:39:18.126237 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:18 crc kubenswrapper[4689]: E1201 08:39:18.126421 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs podName:5d6a08d0-a948-4c69-b3f0-f5e084adb453 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:18.626337955 +0000 UTC m=+38.698625859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs") pod "network-metrics-daemon-jtwvs" (UID: "5d6a08d0-a948-4c69-b3f0-f5e084adb453") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.147121 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.147451 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.147537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.147662 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.147738 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.154077 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwdj\" (UniqueName: \"kubernetes.io/projected/5d6a08d0-a948-4c69-b3f0-f5e084adb453-kube-api-access-fkwdj\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.162057 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.192183 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.224345 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.251059 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.253796 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.253848 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.253861 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.253880 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.253891 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.269662 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.284464 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.305223 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"message\\\":\\\"al\\\\nI1201 08:39:16.832519 5842 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 08:39:16.832524 5842 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 08:39:16.832537 5842 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 08:39:16.832573 5842 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 08:39:16.832594 5842 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 08:39:16.832601 5842 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 08:39:16.832607 5842 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 08:39:16.832608 5842 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 08:39:16.832630 5842 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 08:39:16.832642 5842 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 08:39:16.832646 5842 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 08:39:16.832651 5842 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 08:39:16.832661 5842 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 08:39:16.832666 5842 factory.go:656] Stopping watch factory\\\\nI1201 08:39:16.832674 5842 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 08:39:16.832693 5842 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.320656 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.337314 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.352064 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.356646 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.356714 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.356727 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.356746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.356757 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.367007 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.382799 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.399761 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.417460 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.434268 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"
podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.453406 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.460144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.460210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.460227 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.460253 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.460268 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.562707 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.562732 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.562739 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.562753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.562764 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.570627 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/0.log" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.573021 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.573465 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.600800 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4
df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.613491 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.627607 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.630818 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:18 crc kubenswrapper[4689]: E1201 08:39:18.631035 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:18 crc kubenswrapper[4689]: E1201 08:39:18.631164 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs podName:5d6a08d0-a948-4c69-b3f0-f5e084adb453 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:19.631144446 +0000 UTC m=+39.703432350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs") pod "network-metrics-daemon-jtwvs" (UID: "5d6a08d0-a948-4c69-b3f0-f5e084adb453") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.648097 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf
1737e238f9ccb675a8023abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"message\\\":\\\"al\\\\nI1201 08:39:16.832519 5842 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 08:39:16.832524 5842 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 08:39:16.832537 5842 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 08:39:16.832573 5842 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 08:39:16.832594 5842 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 08:39:16.832601 5842 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 08:39:16.832607 5842 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 08:39:16.832608 5842 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 08:39:16.832630 5842 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 08:39:16.832642 5842 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 08:39:16.832646 5842 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 08:39:16.832651 5842 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 08:39:16.832661 5842 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 08:39:16.832666 5842 factory.go:656] Stopping watch factory\\\\nI1201 08:39:16.832674 5842 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 08:39:16.832693 5842 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.663121 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.664728 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.664857 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.664868 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.664887 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.664897 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.678296 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.697048 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.714253 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.727736 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.743501 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.753511 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.776170 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 
08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.782590 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.782637 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.782651 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.782672 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.782686 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.795328 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.809819 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.823197 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.838008 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:18Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.887215 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.887272 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.887285 4689 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.887304 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.887316 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.990402 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.990452 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.990468 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.990489 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:18 crc kubenswrapper[4689]: I1201 08:39:18.990506 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:18Z","lastTransitionTime":"2025-12-01T08:39:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.047673 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:19 crc kubenswrapper[4689]: E1201 08:39:19.047888 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.093778 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.093844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.093854 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.093876 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.093895 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:19Z","lastTransitionTime":"2025-12-01T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.197920 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.198308 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.198412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.198504 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.198646 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:19Z","lastTransitionTime":"2025-12-01T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.301703 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.301774 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.301793 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.301822 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.301843 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:19Z","lastTransitionTime":"2025-12-01T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.405078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.405527 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.405707 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.405866 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.406000 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:19Z","lastTransitionTime":"2025-12-01T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.510822 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.510885 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.510901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.510922 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.510939 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:19Z","lastTransitionTime":"2025-12-01T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.578874 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/1.log" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.579753 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/0.log" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.582007 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe" exitCode=1 Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.582054 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.582099 4689 scope.go:117] "RemoveContainer" containerID="71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.583004 4689 scope.go:117] "RemoveContainer" containerID="4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe" Dec 01 08:39:19 crc kubenswrapper[4689]: E1201 08:39:19.583326 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.603127 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.613832 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.613883 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.613896 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.613917 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.613927 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:19Z","lastTransitionTime":"2025-12-01T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.624900 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.641029 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:19 crc kubenswrapper[4689]: E1201 08:39:19.641306 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:19 crc kubenswrapper[4689]: E1201 08:39:19.641520 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs podName:5d6a08d0-a948-4c69-b3f0-f5e084adb453 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:21.641479437 +0000 UTC m=+41.713767381 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs") pod "network-metrics-daemon-jtwvs" (UID: "5d6a08d0-a948-4c69-b3f0-f5e084adb453") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.643279 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.666929 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71c071c39e8559860137dcb6dae34e217ed5619f1d7ddd40ef9a2794f87943ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"message\\\":\\\"al\\\\nI1201 08:39:16.832519 5842 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 08:39:16.832524 5842 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 08:39:16.832537 5842 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 08:39:16.832573 5842 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 08:39:16.832594 5842 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 08:39:16.832601 5842 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 08:39:16.832607 5842 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 08:39:16.832608 5842 handler.go:208] Removed *v1.Node event handler 7\\\\nI1201 08:39:16.832630 5842 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 08:39:16.832642 5842 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 08:39:16.832646 5842 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1201 08:39:16.832651 5842 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 08:39:16.832661 5842 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 08:39:16.832666 5842 factory.go:656] Stopping watch factory\\\\nI1201 08:39:16.832674 5842 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 08:39:16.832693 5842 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:19Z\\\",\\\"message\\\":\\\" 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-service-ca-operator for network=default : 1.621102ms\\\\nI1201 08:39:19.012439 6073 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 08:39:19.012454 6073 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 08:39:19.012672 6073 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 08:39:19.012688 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}\\\\nI1201 08:39:19.012700 6073 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.358936ms\\\\nI1201 08:39:19.013045 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}\\\\nI1201 08:39:19.013066 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-console-operator for network=default : 1.708506ms\\\\nI1201 08:39:19.013517 6073 ovnkube.go:599] Stopped ovnkube\\\\nI1201 08:39:19.013605 6073 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 08:39:19.013730 6073 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.680341 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.696555 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.715951 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.717301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.717466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.717479 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.717504 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.717521 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:19Z","lastTransitionTime":"2025-12-01T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
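[annotation] Each "failed to patch status" payload in this window is a Kubernetes strategic merge patch: the "$setElementOrder/conditions" directive pins the order of the merged conditions list, while the entries under "conditions" and "containerStatuses" carry only changed fields. A miniature reconstruction of that payload shape follows; the UID and timestamp are placeholders, not values copied from any single entry above:

// patchshape.go: rebuilds, in miniature, the strategic-merge-patch body
// the kubelet status manager is sending in the entries above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"metadata": map[string]any{"uid": "00000000-0000-0000-0000-000000000000"},
		"status": map[string]any{
			// Strategic-merge directive: desired order of the conditions list.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// Only the fields that changed are sent for each condition.
			"conditions": []map[string]string{
				{"lastTransitionTime": "2025-12-01T08:39:04Z", "status": "True", "type": "Ready"},
			},
		},
	}
	b, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}

The patches themselves are well formed; the API server rejects them only because it must first call the pod admission webhook, and that call is what fails.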
Has your network provider started?"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.729840 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.750609 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.763198 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.773428 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.783410 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 
08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.794715 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.806011 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.818548 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.819923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.819966 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.819976 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.819994 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.820004 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:19Z","lastTransitionTime":"2025-12-01T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.834636 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:19Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.923008 4689 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.923082 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.923106 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.923136 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:19 crc kubenswrapper[4689]: I1201 08:39:19.923154 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:19Z","lastTransitionTime":"2025-12-01T08:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.025884 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.025945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.025962 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.025985 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.026002 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.046656 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.046682 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:20 crc kubenswrapper[4689]: E1201 08:39:20.047125 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.046778 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:20 crc kubenswrapper[4689]: E1201 08:39:20.047296 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:20 crc kubenswrapper[4689]: E1201 08:39:20.047573 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.129840 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.129898 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.129907 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.129923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.129933 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.233423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.233494 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.233508 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.233527 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.233540 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.336409 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.336475 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.336492 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.336554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.336572 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.440820 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.440909 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.440934 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.440964 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.440985 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.544821 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.544935 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.544953 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.544977 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.544997 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.601783 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/1.log" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.605337 4689 scope.go:117] "RemoveContainer" containerID="4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe" Dec 01 08:39:20 crc kubenswrapper[4689]: E1201 08:39:20.605597 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.617020 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 
2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.639041 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2
afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:19Z\\\",\\\"message\\\":\\\" 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-service-ca-operator for network=default : 1.621102ms\\\\nI1201 08:39:19.012439 6073 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 08:39:19.012454 6073 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 08:39:19.012672 6073 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 08:39:19.012688 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}\\\\nI1201 08:39:19.012700 6073 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.358936ms\\\\nI1201 08:39:19.013045 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}\\\\nI1201 08:39:19.013066 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-console-operator for network=default : 1.708506ms\\\\nI1201 08:39:19.013517 6073 ovnkube.go:599] Stopped ovnkube\\\\nI1201 08:39:19.013605 6073 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 08:39:19.013730 6073 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.652291 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.652378 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.652396 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.652412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.652423 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.653926 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.670803 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.684862 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.698103 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.710314 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.724745 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.736304 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.749439 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 
08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.755186 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.755233 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.755244 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.755265 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.755280 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.764922 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.779185 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.796030 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.808451 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.822936 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 
2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.835945 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:20Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.858330 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.858395 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.858410 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.858431 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.858445 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.960847 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.960884 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.960896 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.960914 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:20 crc kubenswrapper[4689]: I1201 08:39:20.960927 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:20Z","lastTransitionTime":"2025-12-01T08:39:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.047323 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:21 crc kubenswrapper[4689]: E1201 08:39:21.047502 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.061443 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.064117 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.064162 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.064176 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.064195 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.064210 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.072701 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.085035 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.099409 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.113863 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.124951 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.147076 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:19Z\\\",\\\"message\\\":\\\" 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-service-ca-operator for network=default : 1.621102ms\\\\nI1201 08:39:19.012439 6073 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 08:39:19.012454 6073 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 08:39:19.012672 6073 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 08:39:19.012688 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}\\\\nI1201 08:39:19.012700 6073 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.358936ms\\\\nI1201 08:39:19.013045 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}\\\\nI1201 08:39:19.013066 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-console-operator for network=default : 1.708506ms\\\\nI1201 08:39:19.013517 6073 ovnkube.go:599] Stopped ovnkube\\\\nI1201 08:39:19.013605 6073 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 08:39:19.013730 6073 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.163812 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.166764 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.166795 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.166803 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.166818 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.166828 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.180727 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.191125 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.204611 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.227057 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.245616 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.260565 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.270319 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.270442 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.270462 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.270490 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 
08:39:21.270508 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.281164 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.297961 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:21Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.373522 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.373574 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.373588 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.373609 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.373622 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.477175 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.477220 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.477231 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.477247 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.477258 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.581954 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.582012 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.582030 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.582055 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.582072 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.661786 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:21 crc kubenswrapper[4689]: E1201 08:39:21.662073 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:21 crc kubenswrapper[4689]: E1201 08:39:21.662198 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs podName:5d6a08d0-a948-4c69-b3f0-f5e084adb453 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:25.662168709 +0000 UTC m=+45.734456613 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs") pod "network-metrics-daemon-jtwvs" (UID: "5d6a08d0-a948-4c69-b3f0-f5e084adb453") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.685132 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.685193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.685206 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.685226 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.685239 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.788740 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.788792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.788802 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.788821 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.788831 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.892119 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.892181 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.892204 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.892232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.892250 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.996433 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.996513 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.996528 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.996551 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:21 crc kubenswrapper[4689]: I1201 08:39:21.996563 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:21Z","lastTransitionTime":"2025-12-01T08:39:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.047101 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.047146 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.047214 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:22 crc kubenswrapper[4689]: E1201 08:39:22.047290 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:22 crc kubenswrapper[4689]: E1201 08:39:22.047562 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:22 crc kubenswrapper[4689]: E1201 08:39:22.047998 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.099313 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.099361 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.099390 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.099411 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.099421 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:22Z","lastTransitionTime":"2025-12-01T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.203223 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.203274 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.203287 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.203308 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.203323 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:22Z","lastTransitionTime":"2025-12-01T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.306500 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.306573 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.306585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.306616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.306630 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:22Z","lastTransitionTime":"2025-12-01T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.409410 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.409484 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.409498 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.409517 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.409530 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:22Z","lastTransitionTime":"2025-12-01T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.513169 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.513211 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.513219 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.513234 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.513244 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:22Z","lastTransitionTime":"2025-12-01T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.617198 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.617301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.617319 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.617345 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.617383 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:22Z","lastTransitionTime":"2025-12-01T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.720499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.720582 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.720607 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.720663 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.720691 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:22Z","lastTransitionTime":"2025-12-01T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.823681 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.823726 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.823738 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.823757 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.823772 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:22Z","lastTransitionTime":"2025-12-01T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.927396 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.927450 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.927465 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.927485 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:22 crc kubenswrapper[4689]: I1201 08:39:22.927496 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:22Z","lastTransitionTime":"2025-12-01T08:39:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.030778 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.030828 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.030839 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.030853 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.030863 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.047282 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:23 crc kubenswrapper[4689]: E1201 08:39:23.047554 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.134781 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.134854 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.134878 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.134916 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.134938 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.239092 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.239156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.239178 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.239210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.239235 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.356803 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.356911 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.356933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.356959 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.356980 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.461445 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.461601 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.461621 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.461649 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.461671 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.565907 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.565994 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.566019 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.566047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.566065 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.669766 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.669833 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.669854 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.669880 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.669898 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.773601 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.773666 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.773685 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.773711 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.773728 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.877356 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.877470 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.877533 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.877561 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.877581 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.981154 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.981197 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.981207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.981225 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:23 crc kubenswrapper[4689]: I1201 08:39:23.981236 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:23Z","lastTransitionTime":"2025-12-01T08:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.047071 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.047083 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:24 crc kubenswrapper[4689]: E1201 08:39:24.047268 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:24 crc kubenswrapper[4689]: E1201 08:39:24.047323 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.047113 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:24 crc kubenswrapper[4689]: E1201 08:39:24.047448 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
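Alongside the node-level loop, the kubelet logs the per-pod fallout: "No sandbox for pod can be found. Need to start a new one" followed by "Error syncing pod, skipping" for the network-check-source, network-check-target, networking-console-plugin and network-metrics-daemon pods, all blocked on the same NetworkReady=false condition. To enumerate every pod stuck this way from a saved journal, a small scanner such as this hypothetical helper works (pipe the kubelet journal into it; the pod=/podUID= attribute format is taken from the records here):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Matches the pod="ns/name" podUID="..." attributes on "Error syncing pod" records.
	re := regexp.MustCompile(`pod="([^"]+)" podUID="([^"]+)"`)
	stuck := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	// Journal records can exceed the scanner's default 64 KiB token limit.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "Error syncing pod") {
			continue
		}
		if m := re.FindStringSubmatch(line); m != nil {
			stuck[m[1]] = m[2]
		}
	}
	for pod, uid := range stuck {
		fmt.Printf("%s (uid %s)\n", pod, uid)
	}
}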
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.084164 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.084257 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.084281 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.084317 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.084343 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.187014 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.187079 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.187098 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.187124 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.187146 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.290218 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.290353 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.290474 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.290567 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.290598 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.394605 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.394669 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.394697 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.394727 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.394768 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.498584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.498688 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.498710 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.498741 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.498756 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.602392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.602432 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.602444 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.602460 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.602470 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.686153 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.686234 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.686254 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.686289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.686308 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: E1201 08:39:24.725348 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:24Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.731252 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.731342 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.731360 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.731407 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.731430 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: E1201 08:39:24.752480 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:24Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.757897 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.757934 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
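The patch failures above expose the root cause behind the whole sequence: the kubelet's node-status update (a strategic-merge patch carrying the $setElementOrder/conditions hints, the allocatable/capacity figures and the node's image list) is well-formed, but it is rejected before reaching the API because the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24, while the node clock reads 2025-12-01. A probe like the one below (a sketch, not part of the log; the address comes from the log line, and InsecureSkipVerify is deliberate so the handshake completes and the expired certificate can be inspected) confirms what the endpoint is serving:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// The webhook endpoint from the log; skip verification so the handshake
	// succeeds even though the serving certificate is expired.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject.String(), cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339), now.After(cert.NotAfter))
	}
}

This is the typical state of a CRC instance resumed long after its certificate lifetime lapsed; the kubelet keeps retrying the identical patch, as the repeated attempts below show, until the certificates are rotated.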
event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.757950 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.757970 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.757981 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: E1201 08:39:24.782201 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:24Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.789918 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.789968 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.789981 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.789999 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.790013 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: E1201 08:39:24.811421 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:24Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.815859 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.815908 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.815923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.815944 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.815957 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: E1201 08:39:24.829797 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:24Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:24 crc kubenswrapper[4689]: E1201 08:39:24.829921 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.831955 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.832014 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.832030 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.832054 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.832068 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.934752 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.934829 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.934844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.934867 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:24 crc kubenswrapper[4689]: I1201 08:39:24.934889 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:24Z","lastTransitionTime":"2025-12-01T08:39:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.037934 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.037992 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.038002 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.038025 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.038037 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.047308 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:25 crc kubenswrapper[4689]: E1201 08:39:25.047596 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.141284 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.141341 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.141353 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.141387 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.141404 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.245275 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.245337 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.245351 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.245398 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.245414 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.348756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.348848 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.348875 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.348911 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.348931 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.452587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.452679 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.452711 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.452746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.452771 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.556783 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.556845 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.556859 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.556883 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.556895 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.660348 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.660419 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.660429 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.660444 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.660453 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.710735 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:25 crc kubenswrapper[4689]: E1201 08:39:25.711035 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:25 crc kubenswrapper[4689]: E1201 08:39:25.711174 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs podName:5d6a08d0-a948-4c69-b3f0-f5e084adb453 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:33.711140656 +0000 UTC m=+53.783428560 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs") pod "network-metrics-daemon-jtwvs" (UID: "5d6a08d0-a948-4c69-b3f0-f5e084adb453") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.762903 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.762989 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.763006 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.763027 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.763041 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.866753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.866828 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.866846 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.866874 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.866897 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.969435 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.969491 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.969506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.969529 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:25 crc kubenswrapper[4689]: I1201 08:39:25.969546 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:25Z","lastTransitionTime":"2025-12-01T08:39:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.046732 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.046784 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:26 crc kubenswrapper[4689]: E1201 08:39:26.047147 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.046784 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:26 crc kubenswrapper[4689]: E1201 08:39:26.047340 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:26 crc kubenswrapper[4689]: E1201 08:39:26.047588 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.072714 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.072767 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.072781 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.072800 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.072814 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:26Z","lastTransitionTime":"2025-12-01T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.176245 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.176309 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.176323 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.176345 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.176360 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:26Z","lastTransitionTime":"2025-12-01T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.280153 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.280196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.280212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.280227 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.280239 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:26Z","lastTransitionTime":"2025-12-01T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.384411 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.384490 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.384515 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.384547 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.384570 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:26Z","lastTransitionTime":"2025-12-01T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.487684 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.487756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.487774 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.487802 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.487822 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:26Z","lastTransitionTime":"2025-12-01T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.591485 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.591612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.591636 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.591666 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.591686 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:26Z","lastTransitionTime":"2025-12-01T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.694960 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.695027 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.695037 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.695063 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.695076 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:26Z","lastTransitionTime":"2025-12-01T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.799223 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.799288 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.799314 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.799337 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.799357 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:26Z","lastTransitionTime":"2025-12-01T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.901759 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.901806 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.901821 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.901839 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:26 crc kubenswrapper[4689]: I1201 08:39:26.901852 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:26Z","lastTransitionTime":"2025-12-01T08:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.004810 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.004938 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.004967 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.005003 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.005031 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.046686 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:27 crc kubenswrapper[4689]: E1201 08:39:27.046916 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.108591 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.108655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.108672 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.108695 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.108708 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.211546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.211593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.211603 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.211626 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.211638 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.291307 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.303709 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.314491 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.314664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.314701 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.314713 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.314732 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.314745 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.332799 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.351950 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.371713 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.390223 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.405480 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.418525 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.418562 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.418573 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.418591 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.418603 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.432509 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf
1737e238f9ccb675a8023abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:19Z\\\",\\\"message\\\":\\\" 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-service-ca-operator for network=default : 1.621102ms\\\\nI1201 08:39:19.012439 6073 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 08:39:19.012454 6073 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 08:39:19.012672 6073 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 08:39:19.012688 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}\\\\nI1201 08:39:19.012700 6073 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.358936ms\\\\nI1201 08:39:19.013045 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}\\\\nI1201 08:39:19.013066 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-console-operator for network=default : 1.708506ms\\\\nI1201 08:39:19.013517 6073 ovnkube.go:599] Stopped ovnkube\\\\nI1201 08:39:19.013605 6073 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 08:39:19.013730 6073 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.448258 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.465441 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.481691 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.499660 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.519314 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.521230 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.521292 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.521302 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.521324 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.521336 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.538118 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:
39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.550185 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.561232 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.573659 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:27Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.624441 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.624497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.624510 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.624531 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.624546 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.727333 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.727444 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.727457 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.727483 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.727495 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.831230 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.831394 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.831421 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.831470 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.831496 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.935085 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.935159 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.935182 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.935219 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:27 crc kubenswrapper[4689]: I1201 08:39:27.935248 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:27Z","lastTransitionTime":"2025-12-01T08:39:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.038471 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.038537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.038561 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.038596 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.038620 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.046960 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.046960 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.047152 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:28 crc kubenswrapper[4689]: E1201 08:39:28.047316 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:28 crc kubenswrapper[4689]: E1201 08:39:28.049123 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:28 crc kubenswrapper[4689]: E1201 08:39:28.049484 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.141867 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.141938 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.141958 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.141986 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.142011 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.245497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.245571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.245596 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.245630 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.245656 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.349292 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.349353 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.349391 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.349422 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.349461 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.453430 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.453499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.453518 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.453557 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.453596 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.556410 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.556455 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.556469 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.556487 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.556499 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.663578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.663609 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.663634 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.663650 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.663659 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.766409 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.766450 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.766460 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.766475 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.766485 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.870034 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.870121 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.870144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.870193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.870220 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.972117 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.972163 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.972174 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.972190 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:28 crc kubenswrapper[4689]: I1201 08:39:28.972199 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:28Z","lastTransitionTime":"2025-12-01T08:39:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.047327 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:29 crc kubenswrapper[4689]: E1201 08:39:29.047544 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.074942 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.075013 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.075036 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.075067 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.075090 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:29Z","lastTransitionTime":"2025-12-01T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.177640 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.177683 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.177691 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.177705 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.177715 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:29Z","lastTransitionTime":"2025-12-01T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.280420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.280482 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.280500 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.280525 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.280543 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:29Z","lastTransitionTime":"2025-12-01T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.383546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.383629 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.383652 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.383681 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.383701 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:29Z","lastTransitionTime":"2025-12-01T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.487113 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.487168 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.487179 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.487198 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.487211 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:29Z","lastTransitionTime":"2025-12-01T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.589952 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.590018 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.590034 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.590056 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.590068 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:29Z","lastTransitionTime":"2025-12-01T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.692416 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.692481 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.692493 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.692512 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.692525 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:29Z","lastTransitionTime":"2025-12-01T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.795529 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.795584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.795598 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.795616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.795626 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:29Z","lastTransitionTime":"2025-12-01T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.899487 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.899557 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.899710 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.899756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:29 crc kubenswrapper[4689]: I1201 08:39:29.899793 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:29Z","lastTransitionTime":"2025-12-01T08:39:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.002235 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.002314 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.002329 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.002348 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.002360 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.047024 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.047095 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:30 crc kubenswrapper[4689]: E1201 08:39:30.047204 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:30 crc kubenswrapper[4689]: E1201 08:39:30.047405 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.047024 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:30 crc kubenswrapper[4689]: E1201 08:39:30.047567 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.105801 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.106040 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.106058 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.106076 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.106089 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.209085 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.209151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.209169 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.209193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.209208 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.314623 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.314699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.314717 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.314748 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.314767 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.418661 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.418768 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.418789 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.418819 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.418839 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.524192 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.524249 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.524259 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.524281 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.524292 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.627087 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.627148 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.627164 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.627191 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.627210 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.730787 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.730858 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.730881 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.730914 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.730937 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.834803 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.835104 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.835196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.835279 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.835347 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.938583 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.938971 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.939225 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.939513 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:30 crc kubenswrapper[4689]: I1201 08:39:30.939720 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:30Z","lastTransitionTime":"2025-12-01T08:39:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.043145 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.043455 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.043613 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.043766 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.043896 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.046712 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:31 crc kubenswrapper[4689]: E1201 08:39:31.046860 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.066248 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.086086 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.101833 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.117476 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.134766 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.147887 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.147943 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.147956 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.147975 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.147989 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.148805 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.168859 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.184232 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.200349 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.216879 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.233705 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.250064 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.252092 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.252145 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.252156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.252172 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.252183 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.268991 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.306840 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:19Z\\\",\\\"message\\\":\\\" 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-service-ca-operator for network=default : 1.621102ms\\\\nI1201 08:39:19.012439 6073 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 08:39:19.012454 6073 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 08:39:19.012672 6073 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 08:39:19.012688 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}\\\\nI1201 08:39:19.012700 6073 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.358936ms\\\\nI1201 08:39:19.013045 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}\\\\nI1201 08:39:19.013066 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-console-operator for network=default : 1.708506ms\\\\nI1201 08:39:19.013517 6073 ovnkube.go:599] Stopped ovnkube\\\\nI1201 08:39:19.013605 6073 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 08:39:19.013730 6073 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.323799 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.343315 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.354653 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.354689 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.354700 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.354719 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.354732 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.357601 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:31Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.457467 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.457512 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.457524 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.457540 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.457550 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.561015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.561470 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.561845 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.562088 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.562261 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.665927 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.666003 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.666027 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.666055 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.666076 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.769756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.769825 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.769856 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.769889 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.769914 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.874000 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.874076 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.874100 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.874130 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.874153 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.977879 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.977995 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.978023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.978057 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:31 crc kubenswrapper[4689]: I1201 08:39:31.978080 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:31Z","lastTransitionTime":"2025-12-01T08:39:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.047078 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.047213 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.047457 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.047539 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.047686 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.048615 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.049465 4689 scope.go:117] "RemoveContainer" containerID="4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.082210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.083265 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.083533 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.083765 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.084026 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:32Z","lastTransitionTime":"2025-12-01T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.188207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.188534 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.188552 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.188571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.188588 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:32Z","lastTransitionTime":"2025-12-01T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.291493 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.291546 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.291559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.291579 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.291594 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:32Z","lastTransitionTime":"2025-12-01T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.393938 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.393988 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.393997 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.394013 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.394024 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:32Z","lastTransitionTime":"2025-12-01T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.496359 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.496447 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.496459 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.496479 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.496492 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:32Z","lastTransitionTime":"2025-12-01T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.612865 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.612912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.612921 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.612934 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.612954 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:32Z","lastTransitionTime":"2025-12-01T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.671864 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/1.log" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.680442 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.680933 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.693329 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.705322 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.715259 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.715308 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.715317 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.715330 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.715339 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:32Z","lastTransitionTime":"2025-12-01T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.716691 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.716767 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.716808 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:40:04.716789558 +0000 UTC m=+84.789077452 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.716859 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.716889 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.716956 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.716997 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.717023 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.717043 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.717053 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:40:04.717028094 +0000 UTC m=+84.789316008 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.717087 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:40:04.717077136 +0000 UTC m=+84.789365050 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.717056 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.717338 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 08:40:04.717328703 +0000 UTC m=+84.789616617 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.719411 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"m
ultus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.733507 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.743997 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.756514 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 
08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.770948 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.784967 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.796323 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.810952 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.817501 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.817726 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.817750 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.817765 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:32 crc kubenswrapper[4689]: E1201 08:39:32.817822 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 08:40:04.817805441 +0000 UTC m=+84.890093345 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.817959 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.817981 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.817989 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.818003 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.818013 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:32Z","lastTransitionTime":"2025-12-01T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.822444 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.833839 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.854590 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:19Z\\\",\\\"message\\\":\\\" 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-service-ca-operator for network=default : 1.621102ms\\\\nI1201 08:39:19.012439 6073 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 08:39:19.012454 6073 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 08:39:19.012672 6073 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 08:39:19.012688 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}\\\\nI1201 08:39:19.012700 6073 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.358936ms\\\\nI1201 08:39:19.013045 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}\\\\nI1201 08:39:19.013066 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-console-operator for network=default : 1.708506ms\\\\nI1201 08:39:19.013517 6073 ovnkube.go:599] Stopped ovnkube\\\\nI1201 08:39:19.013605 6073 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 08:39:19.013730 6073 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
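
[Editor's note] The kubenswrapper records above embed entire pod-status PATCH bodies as JSON that has been string-quoted twice on its way into the journal: once when the kubelet quotes the patch inside its error message, and once more by the structured-log quoting. That is why every JSON quote arrives as \\\" and quotes inside the nested container-log excerpts arrive as \\\\\\\" (three quoting layers). Note also that the ovnkube-controller lastState.terminated message is truncated in the source ("F1201 08:39:19.013730 6073 ovnkube.go:" ends mid-line) and is left as-is. A minimal sketch for recovering the embedded JSON, assuming a fragment has been copied verbatim out of one record (the uid below is the openshift-kube-scheduler-crc pod uid from a later record):

```python
import json

# One fragment copied verbatim from a kubenswrapper record; the r-string
# keeps the backslashes exactly as they appear in the journal.
raw = r'{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"}}'

# Each unicode_escape pass strips one quoting layer: \\\" -> \" -> "
level1 = raw.encode().decode("unicode_escape")
level2 = level1.encode().decode("unicode_escape")

patch = json.loads(level2)
print(patch["metadata"]["uid"])  # 82556f94-5534-4aae-9690-5ce8e8d38113
```

The innermost strings (the quoted container log lines whose quotes arrive as \\\\\\\") would need one further pass.
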
[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.866762 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.880093 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.896017 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.907627 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:32Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.920256 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.920442 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.920514 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.920587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:32 crc kubenswrapper[4689]: I1201 08:39:32.920667 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:32Z","lastTransitionTime":"2025-12-01T08:39:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.023920 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.024015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.024035 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.024059 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.024076 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.048591 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:33 crc kubenswrapper[4689]: E1201 08:39:33.048825 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.127028 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.127080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.127094 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.127115 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.127131 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.230780 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.231166 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.231461 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.231636 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.231839 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.334825 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.334898 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.334918 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.334945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.334962 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.437511 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.437573 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.437597 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.437628 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.437651 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.541128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.541188 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.541209 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.541234 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.541255 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.644458 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.644517 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.644535 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.644563 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.644581 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.687098 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/2.log" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.688289 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/1.log" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.692175 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7" exitCode=1 Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.692240 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.692356 4689 scope.go:117] "RemoveContainer" containerID="4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.693240 4689 scope.go:117] "RemoveContainer" containerID="2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7" Dec 01 08:39:33 crc kubenswrapper[4689]: E1201 08:39:33.693493 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.728245 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:33 crc kubenswrapper[4689]: E1201 08:39:33.728506 4689 secret.go:188] Couldn't 
get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:33 crc kubenswrapper[4689]: E1201 08:39:33.728613 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs podName:5d6a08d0-a948-4c69-b3f0-f5e084adb453 nodeName:}" failed. No retries permitted until 2025-12-01 08:39:49.72858766 +0000 UTC m=+69.800875594 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs") pod "network-metrics-daemon-jtwvs" (UID: "5d6a08d0-a948-4c69-b3f0-f5e084adb453") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.730747 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.746321 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.748119 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.748169 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.748185 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.748210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.748230 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.764010 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.781970 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.797684 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.813762 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 
08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.828603 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.843123 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.850772 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.850816 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.850832 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.850855 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.850873 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.859691 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.875288 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.891561 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.903443 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.931774 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f548894d4be4438ed8d8452bec2e494c1330ccf1737e238f9ccb675a8023abe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:19Z\\\",\\\"message\\\":\\\" 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-service-ca-operator for network=default : 1.621102ms\\\\nI1201 08:39:19.012439 6073 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1201 08:39:19.012454 6073 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1201 08:39:19.012672 6073 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1201 08:39:19.012688 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication/oauth-openshift\\\\\\\"}\\\\nI1201 08:39:19.012700 6073 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.358936ms\\\\nI1201 08:39:19.013045 6073 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console-operator/metrics\\\\\\\"}\\\\nI1201 08:39:19.013066 6073 services_controller.go:360] Finished syncing service metrics on namespace openshift-console-operator for network=default : 1.708506ms\\\\nI1201 08:39:19.013517 6073 ovnkube.go:599] Stopped ovnkube\\\\nI1201 08:39:19.013605 6073 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 08:39:19.013730 6073 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the 
*\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.944256 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.953607 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.953643 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.953659 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.953679 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.953697 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:33Z","lastTransitionTime":"2025-12-01T08:39:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.960139 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.977711 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c274
09765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:33 crc kubenswrapper[4689]: I1201 08:39:33.992753 4689 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z" 
Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.046499 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.046812 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.047316 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.047485 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.047697 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.048081 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.057039 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.057123 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.057148 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.057182 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.057205 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.160720 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.160792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.160815 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.160846 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.160870 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.263541 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.263610 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.263633 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.263660 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.263680 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.366632 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.366999 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.367259 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.367480 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.367621 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.471123 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.471192 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.471218 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.471254 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.471276 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.574781 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.574832 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.574841 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.574859 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.574873 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.677179 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.677234 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.677243 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.677260 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.677270 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.703977 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/2.log" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.711665 4689 scope.go:117] "RemoveContainer" containerID="2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.711983 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.731702 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.750232 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.767690 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.781036 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.781263 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.781625 4689 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.781693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.781808 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.783780 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.799707 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.814223 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.836653 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc
92ae5f92ff69ce96b8bb82f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.840552 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.840600 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.840633 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.840655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.840668 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.852272 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.855989 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.860752 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.860823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.860838 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.860883 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.860900 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.871625 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.875163 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.881625 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.881679 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.881691 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.881709 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.881724 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.890967 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.903761 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.904383 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.908014 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.908042 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.908053 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.908069 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.908083 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.916921 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.920067 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.923520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.923578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.923594 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.923612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.923624 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.930647 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.934764 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: E1201 08:39:34.934959 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.936506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.936530 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.936537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.936551 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.936561 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:34Z","lastTransitionTime":"2025-12-01T08:39:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.945848 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.958422 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.971781 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:34 crc kubenswrapper[4689]: I1201 08:39:34.986314 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:34Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.039775 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.039842 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.039858 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.039877 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.039888 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:35Z","lastTransitionTime":"2025-12-01T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.046427 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:35 crc kubenswrapper[4689]: E1201 08:39:35.046576 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.148137 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.148412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.148423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.148479 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.148533 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:35Z","lastTransitionTime":"2025-12-01T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.252773 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.252817 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.252826 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.252844 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:35 crc kubenswrapper[4689]: I1201 08:39:35.252856 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:35Z","lastTransitionTime":"2025-12-01T08:39:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[log trimmed: the same node-status block repeats seven more times between 08:39:35.356 and 08:39:35.980]
Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.046942 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.046951 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs"
Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.046942 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 08:39:36 crc kubenswrapper[4689]: E1201 08:39:36.047354 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 08:39:36 crc kubenswrapper[4689]: E1201 08:39:36.047627 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 08:39:36 crc kubenswrapper[4689]: E1201 08:39:36.047856 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453"
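[editor's note] The NetworkReady=false condition driving all of these records reduces to a directory test: the container runtime finds no CNI configuration in /etc/kubernetes/cni/net.d/. Roughly what that check amounts to, as a standalone Go sketch (the accepted file extensions are an assumption, not taken from this log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the log records
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            // Extensions CNI config loaders typically accept (assumption).
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                found = true
            }
        }
        if !found {
            fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
        } else {
            fmt.Println("CNI configuration present in", confDir)
        }
    }

Once the network provider (here, the OVN-Kubernetes pods the node is waiting on) writes its config into that directory, the runtime flips NetworkReady and the sandbox creations below can proceed.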
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.084212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.084287 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.084298 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.084316 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.084329 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:36Z","lastTransitionTime":"2025-12-01T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.187463 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.187931 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.188118 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.188293 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.188546 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:36Z","lastTransitionTime":"2025-12-01T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.291929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.292003 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.292021 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.292047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.292065 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:36Z","lastTransitionTime":"2025-12-01T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.398508 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.398756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.398791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.398876 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.398904 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:36Z","lastTransitionTime":"2025-12-01T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.503686 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.503780 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.503811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.503839 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.503857 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:36Z","lastTransitionTime":"2025-12-01T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.608002 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.608089 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.608114 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.608150 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.608177 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:36Z","lastTransitionTime":"2025-12-01T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.712360 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.712474 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.712494 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.712520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.712540 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:36Z","lastTransitionTime":"2025-12-01T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.815386 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.815453 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.815466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.815488 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.815501 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:36Z","lastTransitionTime":"2025-12-01T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.927949 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.928004 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.928018 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.928088 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:36 crc kubenswrapper[4689]: I1201 08:39:36.928109 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:36Z","lastTransitionTime":"2025-12-01T08:39:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.032628 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.032725 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.032749 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.032783 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.032806 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.047590 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:37 crc kubenswrapper[4689]: E1201 08:39:37.048081 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
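[editor's note] Every "Node became not ready" record embeds the same condition object. A short Go sketch that decodes it, using only fields that appear verbatim in the records above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // NodeCondition mirrors the fields present in the logged condition object.
    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Copied from one of the setters.go:603 records above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }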
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.136151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.136201 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.136212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.136234 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.136253 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.239140 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.239200 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.239214 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.239236 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.239248 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.342905 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.342988 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.343001 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.343023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.343040 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.446338 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.446730 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.446817 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.446925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.447022 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.551133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.551197 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.551213 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.551241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.551254 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.657415 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.657483 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.657518 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.657551 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.657574 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.762105 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.762608 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.762710 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.762857 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.763024 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.866692 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.866734 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.866747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.866767 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.866781 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.972041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.972131 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.972157 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.972190 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:37 crc kubenswrapper[4689]: I1201 08:39:37.972246 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:37Z","lastTransitionTime":"2025-12-01T08:39:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.046884 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.046953 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.046884 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:38 crc kubenswrapper[4689]: E1201 08:39:38.047142 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:38 crc kubenswrapper[4689]: E1201 08:39:38.047321 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:38 crc kubenswrapper[4689]: E1201 08:39:38.047726 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
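[editor's note] The Ready condition the kubelet is logging here is the same one it pushes to the API server on the Node object. A sketch using client-go (assumed available as a dependency; the kubeconfig path is hypothetical) that reads it from the API instead of the journal:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig location for a CRC host.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == "Ready" {
                // For this log window, expect Ready=False reason=KubeletNotReady.
                fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
            }
        }
    }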
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.076484 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.076547 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.076566 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.076593 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.076613 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:38Z","lastTransitionTime":"2025-12-01T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.179478 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.179536 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.179557 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.179578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.179593 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:38Z","lastTransitionTime":"2025-12-01T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.282745 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.282819 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.282842 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.282872 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.282896 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:38Z","lastTransitionTime":"2025-12-01T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.386307 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.386425 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.386460 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.386492 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.386514 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:38Z","lastTransitionTime":"2025-12-01T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.489906 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.489975 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.489993 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.490023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.490043 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:38Z","lastTransitionTime":"2025-12-01T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.592495 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.592541 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.592551 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.592571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.592587 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:38Z","lastTransitionTime":"2025-12-01T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.695491 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.695589 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.695614 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.695645 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.695665 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:38Z","lastTransitionTime":"2025-12-01T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.798340 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.798527 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.798576 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.798671 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.798692 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:38Z","lastTransitionTime":"2025-12-01T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.902998 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.903115 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.903148 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.903182 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:38 crc kubenswrapper[4689]: I1201 08:39:38.903204 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:38Z","lastTransitionTime":"2025-12-01T08:39:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.007047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.007126 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.007146 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.007178 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.007203 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.047520 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:39 crc kubenswrapper[4689]: E1201 08:39:39.047743 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.110971 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.111035 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.111047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.111068 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.111082 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.214938 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.215071 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.215093 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.215121 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.215140 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.318913 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.318973 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.318993 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.319019 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.319041 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.422794 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.422862 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.422880 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.422905 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.422922 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.538474 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.538559 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.538579 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.538608 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.538628 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.641818 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.641876 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.641892 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.641913 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.641929 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.745035 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.745124 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.745146 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.745171 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.745190 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.849273 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.849344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.849362 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.849411 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.849428 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.952230 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.952302 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.952318 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.952346 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:39 crc kubenswrapper[4689]: I1201 08:39:39.952399 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:39Z","lastTransitionTime":"2025-12-01T08:39:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.047039 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.047043 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:40 crc kubenswrapper[4689]: E1201 08:39:40.047283 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.047043 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:40 crc kubenswrapper[4689]: E1201 08:39:40.047486 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:40 crc kubenswrapper[4689]: E1201 08:39:40.047658 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.056157 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.056223 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.056240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.056266 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.056284 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.160465 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.160535 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.160560 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.160592 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.160618 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.263877 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.263930 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.263941 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.263962 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.263977 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.367326 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.367395 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.367409 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.367433 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.367451 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.470138 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.470251 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.470277 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.470300 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.470318 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.574073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.574167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.574192 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.574224 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.574244 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.677712 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.677789 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.677809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.677835 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.677855 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.781550 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.781603 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.781619 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.781644 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.781664 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.884830 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.884895 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.884912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.884937 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.884952 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.988010 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.988055 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.988070 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.988091 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:40 crc kubenswrapper[4689]: I1201 08:39:40.988111 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:40Z","lastTransitionTime":"2025-12-01T08:39:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.046781 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:41 crc kubenswrapper[4689]: E1201 08:39:41.047124 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.067396 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.086563 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.094033 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.094070 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.094082 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.094101 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.094113 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:41Z","lastTransitionTime":"2025-12-01T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.103921 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.128909 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.147908 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.163452 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.174698 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.186806 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.197341 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.197454 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.197479 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.197507 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.197526 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:41Z","lastTransitionTime":"2025-12-01T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.200234 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.214123 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.229564 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.243819 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.260180 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 
08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.272249 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.285102 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.298085 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.300700 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.300730 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.300744 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.300763 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.300777 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:41Z","lastTransitionTime":"2025-12-01T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.311239 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:41Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.404104 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 
08:39:41.404647 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.404740 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.404827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.404954 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:41Z","lastTransitionTime":"2025-12-01T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.508712 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.508765 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.508779 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.508798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.508811 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:41Z","lastTransitionTime":"2025-12-01T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.611866 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.611961 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.611998 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.612036 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.612064 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:41Z","lastTransitionTime":"2025-12-01T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.717320 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.717456 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.717490 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.717524 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.717544 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:41Z","lastTransitionTime":"2025-12-01T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.820727 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.820843 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.820865 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.820894 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.820913 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:41Z","lastTransitionTime":"2025-12-01T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.924690 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.924791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.924809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.924835 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:41 crc kubenswrapper[4689]: I1201 08:39:41.924853 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:41Z","lastTransitionTime":"2025-12-01T08:39:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.029031 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.029720 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.029826 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.029926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.030017 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.046906 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.046906 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:42 crc kubenswrapper[4689]: E1201 08:39:42.047281 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.047307 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:42 crc kubenswrapper[4689]: E1201 08:39:42.047540 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:42 crc kubenswrapper[4689]: E1201 08:39:42.047230 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.133968 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.134002 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.134012 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.134027 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.134036 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.237525 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.237619 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.237646 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.237681 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.237717 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.340626 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.340670 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.340685 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.340705 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.340719 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.444682 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.444796 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.444823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.444912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.445021 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.548925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.548977 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.548995 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.549018 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.549036 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.651939 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.652039 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.652061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.652087 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.652104 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.755459 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.755580 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.755601 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.755672 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.755692 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.859043 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.859099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.859118 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.859143 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.859161 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.962501 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.962585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.962610 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.962641 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:42 crc kubenswrapper[4689]: I1201 08:39:42.962666 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:42Z","lastTransitionTime":"2025-12-01T08:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.046684 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:43 crc kubenswrapper[4689]: E1201 08:39:43.046916 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.065911 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.065964 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.065983 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.066006 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.066023 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:43Z","lastTransitionTime":"2025-12-01T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.168897 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.168948 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.168962 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.168982 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.168997 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:43Z","lastTransitionTime":"2025-12-01T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.272916 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.272975 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.272993 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.273019 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.273037 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:43Z","lastTransitionTime":"2025-12-01T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.377312 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.377432 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.377458 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.377489 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.377513 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:43Z","lastTransitionTime":"2025-12-01T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.481666 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.481728 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.481741 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.481761 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.481776 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:43Z","lastTransitionTime":"2025-12-01T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.585259 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.585287 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.585294 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.585308 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.585318 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:43Z","lastTransitionTime":"2025-12-01T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.689473 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.689541 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.689560 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.689587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.689604 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:43Z","lastTransitionTime":"2025-12-01T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.792750 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.793121 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.793343 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.793600 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.793762 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:43Z","lastTransitionTime":"2025-12-01T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.897202 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.897274 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.897330 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.897363 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:43 crc kubenswrapper[4689]: I1201 08:39:43.897420 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:43Z","lastTransitionTime":"2025-12-01T08:39:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.000270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.000314 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.000325 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.000344 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.000357 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.046862 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:44 crc kubenswrapper[4689]: E1201 08:39:44.047084 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.047110 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.047240 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:44 crc kubenswrapper[4689]: E1201 08:39:44.047335 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:44 crc kubenswrapper[4689]: E1201 08:39:44.050159 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.103856 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.103942 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.103956 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.103978 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.103993 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.207346 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.207425 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.207442 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.207465 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.207480 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.310880 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.310954 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.310978 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.311010 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.311032 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.413781 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.413824 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.413836 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.413865 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.413880 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.516755 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.516792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.516801 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.516816 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.516827 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.619635 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.619720 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.619732 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.619749 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.619759 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.722162 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.722216 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.722227 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.722249 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.722266 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.825173 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.825232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.825246 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.825269 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.825282 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.928088 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.928127 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.928136 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.928232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:44 crc kubenswrapper[4689]: I1201 08:39:44.928624 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:44Z","lastTransitionTime":"2025-12-01T08:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.033038 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.033626 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.033722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.033826 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.033945 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.046576 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:45 crc kubenswrapper[4689]: E1201 08:39:45.046840 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.137295 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.137722 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.137792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.137882 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.137952 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.240977 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.241419 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.241487 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.241583 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.241762 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.292329 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.292391 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.292406 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.292425 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.292439 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: E1201 08:39:45.311809 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:45Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.317241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.317304 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.317314 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.317331 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.317345 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: E1201 08:39:45.335265 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:45Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.341088 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.341157 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.341171 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.341195 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.341209 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: E1201 08:39:45.358498 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:45Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.363674 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.363776 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
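Every retry in this capture fails identically: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, long before the node's clock time of 2025-12-01. A minimal diagnostic sketch, assuming it is run on the crc node itself while the webhook is listening (it is not part of the log): fetch the served certificate with verification disabled so the validity window can be read despite the expiry.

    import socket
    import ssl

    # Endpoint named in the webhook error above (assumed reachable from the node).
    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # accept the expired certificate so we can inspect it

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der_cert = tls.getpeercert(binary_form=True)  # raw DER, available even unverified

    # Print the PEM form; any X.509 tool (e.g. `openssl x509 -noout -dates`)
    # will then show the notBefore/notAfter pair the kubelet is rejecting.
    print(ssl.DER_cert_to_PEM_cert(der_cert))

The same check from a shell is `openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates`; either way, it is the certificate expiry, not the patch itself, that blocks the node-status update.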
event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.363797 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.363863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.363883 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: E1201 08:39:45.381803 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:45Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.387954 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.387991 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
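The payload being retried is hard to read because it is escaped twice over (each `"` of the patch arrives as `\\\"` inside the quoted err value). Unescaped, it is a routine strategic-merge patch of the Node status; `$setElementOrder/conditions` is the merge directive that pins the ordering of the conditions list. A skeleton of that patch as a Python literal, using only values that appear verbatim in the log (the roughly 60-entry images list and the four condition bodies are elided for brevity):

    import json

    # Skeleton of the status patch the kubelet keeps retrying, after unescaping.
    status_patch = {
        "status": {
            "$setElementOrder/conditions": [  # strategic-merge ordering directive
                {"type": "MemoryPressure"},
                {"type": "DiskPressure"},
                {"type": "PIDPressure"},
                {"type": "Ready"},
            ],
            "allocatable": {
                "cpu": "7800m",
                "ephemeral-storage": "76396645454",
                "memory": "24148068Ki",
            },
            "capacity": {
                "cpu": "8",
                "ephemeral-storage": "83293888Ki",
                "memory": "24608868Ki",
            },
            "conditions": [],  # elided: MemoryPressure/DiskPressure/PIDPressure/Ready entries
            "images": [],      # elided: the name/sizeBytes pairs listed in the payload above
            "nodeInfo": {
                "bootID": "87f9359d-17aa-499d-90bc-b05146c26f0f",
                "systemUUID": "1b1c64ae-9dbc-417f-9fac-7f3e657b08f7",
            },
        }
    }

    print(json.dumps(status_patch, indent=2))

Nothing in the patch is malformed; the apiserver rejects it only because the admission webhook that must approve node updates cannot be reached over TLS.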
event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.388003 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.388023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.388038 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: E1201 08:39:45.402041 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:45Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:45 crc kubenswrapper[4689]: E1201 08:39:45.402222 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.404289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
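After the retry budget is exhausted the kubelet simply starts over, so the remainder of the capture is the same five-record status cycle roughly every 100 ms. Rather than reading the repetitions by hand, they can be tallied from a saved journal. A sketch, assuming the log was captured one record per line (for example via `journalctl -u kubelet > kubelet.log`; the file name is hypothetical):

    import re
    from collections import Counter

    event_counts = Counter()
    not_ready_heartbeats = []

    with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical capture of this journal
        for line in fh:
            # Tally the four "Recording event message for node" variants per cycle.
            m = re.search(r'"Recording event message for node" node="crc" event="(\w+)"', line)
            if m:
                event_counts[m.group(1)] += 1
            # Collect heartbeat times from the "Node became not ready" condition records.
            if '"Node became not ready"' in line:
                hb = re.search(r'"lastHeartbeatTime":"([^"]+)"', line)
                if hb:
                    not_ready_heartbeats.append(hb.group(1))

    print(event_counts)
    if not_ready_heartbeats:
        print(len(not_ready_heartbeats), "not-ready cycles between",
              not_ready_heartbeats[0], "and", not_ready_heartbeats[-1])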
event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.404360 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.404412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.404434 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.404446 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.507907 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.507963 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.507975 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.507994 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.508008 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.611416 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.611486 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.611498 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.611514 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.611528 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.714834 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.714881 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.714897 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.714916 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.714930 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.817900 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.817950 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.817962 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.817981 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.817992 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.920041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.920082 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.920092 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.920107 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:45 crc kubenswrapper[4689]: I1201 08:39:45.920120 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:45Z","lastTransitionTime":"2025-12-01T08:39:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.023454 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.023494 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.023504 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.023520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.023535 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.047016 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.047022 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:46 crc kubenswrapper[4689]: E1201 08:39:46.047177 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.047124 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:46 crc kubenswrapper[4689]: E1201 08:39:46.047324 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:46 crc kubenswrapper[4689]: E1201 08:39:46.047501 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.126521 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.126622 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.126637 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.126655 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.126993 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.230301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.230355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.230392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.230417 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.230430 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.333210 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.333300 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.333316 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.333336 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.333351 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.436739 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.436790 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.436799 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.436818 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.436829 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.540033 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.540095 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.540107 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.540128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.540141 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.644167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.644234 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.644246 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.644264 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.644279 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.748009 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.748077 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.748088 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.748109 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.748122 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.850828 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.850895 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.850910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.851297 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.851351 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.955506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.955772 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.955790 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.955815 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:46 crc kubenswrapper[4689]: I1201 08:39:46.955835 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:46Z","lastTransitionTime":"2025-12-01T08:39:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.046724 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 08:39:47 crc kubenswrapper[4689]: E1201 08:39:47.046972 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.059727 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.059791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.059810 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.059836 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.059852 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.164065 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.164157 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.164179 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.164211 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.164238 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.267957 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.268104 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.268126 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.268152 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.268170 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.370930 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.370963 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.370975 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.370991 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.371002 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.473489 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.473581 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.473595 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.473613 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.473627 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.576801 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.576860 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.576873 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.576893 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.576910 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.679481 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.679524 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.679535 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.679551 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.679561 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.782065 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.782108 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.782118 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.782138 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.782152 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.885316 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.885351 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.885375 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.885391 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.885420 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.987351 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.987406 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.987415 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.987430 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:47 crc kubenswrapper[4689]: I1201 08:39:47.987442 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:47Z","lastTransitionTime":"2025-12-01T08:39:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.047118 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs"
Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.047198 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 08:39:48 crc kubenswrapper[4689]: E1201 08:39:48.047284 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453"
Dec 01 08:39:48 crc kubenswrapper[4689]: E1201 08:39:48.047346 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.047435 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 08:39:48 crc kubenswrapper[4689]: E1201 08:39:48.047752 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.090833 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.090874 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.090883 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.090899 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.090938 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:48Z","lastTransitionTime":"2025-12-01T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.194607 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.194675 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.194777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.194809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.194829 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:48Z","lastTransitionTime":"2025-12-01T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.297572 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.297630 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.297641 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.297665 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.297678 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:48Z","lastTransitionTime":"2025-12-01T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.400099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.400138 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.400156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.400175 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.400189 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:48Z","lastTransitionTime":"2025-12-01T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.502888 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.502994 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.503010 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.503033 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.503047 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:48Z","lastTransitionTime":"2025-12-01T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.606288 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.606339 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.606350 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.606385 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.606398 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:48Z","lastTransitionTime":"2025-12-01T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.708918 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.708957 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.708969 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.708984 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.708994 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:48Z","lastTransitionTime":"2025-12-01T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.811592 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.811674 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.811699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.811733 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.811758 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:48Z","lastTransitionTime":"2025-12-01T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.914180 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.914209 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.914218 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.914232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:48 crc kubenswrapper[4689]: I1201 08:39:48.914241 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:48Z","lastTransitionTime":"2025-12-01T08:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.016823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.016861 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.016870 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.016899 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.016914 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.046892 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 08:39:49 crc kubenswrapper[4689]: E1201 08:39:49.047302 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.047628 4689 scope.go:117] "RemoveContainer" containerID="2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7"
Dec 01 08:39:49 crc kubenswrapper[4689]: E1201 08:39:49.047960 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.119910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.119958 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.119972 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.119993 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.120005 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.223664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.223736 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.223749 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.223768 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.223781 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.327234 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.327615 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.327665 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.327694 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.327716 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.430502 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.430562 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.430575 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.430594 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.430613 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.533908 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.533970 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.533988 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.534014 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.534030 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.636323 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.636379 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.636389 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.636412 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.636445 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.731738 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs"
Dec 01 08:39:49 crc kubenswrapper[4689]: E1201 08:39:49.731958 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 01 08:39:49 crc kubenswrapper[4689]: E1201 08:39:49.732059 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs podName:5d6a08d0-a948-4c69-b3f0-f5e084adb453 nodeName:}" failed. No retries permitted until 2025-12-01 08:40:21.732019413 +0000 UTC m=+101.804307317 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs") pod "network-metrics-daemon-jtwvs" (UID: "5d6a08d0-a948-4c69-b3f0-f5e084adb453") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.738606 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.738650 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.738667 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.738692 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.738711 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.841841 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.841879 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.841891 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.841910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.841926 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.944615 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.944657 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.944671 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.944692 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:49 crc kubenswrapper[4689]: I1201 08:39:49.944708 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:49Z","lastTransitionTime":"2025-12-01T08:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.046438 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.046475 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.046526 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 01 08:39:50 crc kubenswrapper[4689]: E1201 08:39:50.046665 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 01 08:39:50 crc kubenswrapper[4689]: E1201 08:39:50.046763 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453"
Dec 01 08:39:50 crc kubenswrapper[4689]: E1201 08:39:50.046879 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.047522 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.047592 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.047606 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.047642 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.047663 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.150849 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.150899 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.150926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.150944 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.150958 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.253747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.253806 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.253815 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.253833 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.253844 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.356991 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.357052 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.357062 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.357082 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.357096 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.459964 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.460029 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.460041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.460059 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.460098 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.563438 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.563481 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.563493 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.563514 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.563527 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.666582 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.666613 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.666623 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.666636 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.666646 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.769130 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.769160 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.769168 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.769183 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.769193 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.872240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.872291 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.872302 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.872321 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.872333 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.976598 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.976645 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.976659 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.976678 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:50 crc kubenswrapper[4689]: I1201 08:39:50.976691 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:50Z","lastTransitionTime":"2025-12-01T08:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.046769 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 08:39:51 crc kubenswrapper[4689]: E1201 08:39:51.046965 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.058705 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.071777 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.081653 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.081686 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.081696 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.081714 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.081726 4689 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:51Z","lastTransitionTime":"2025-12-01T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.082457 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.095829 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverri
de-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.105468 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.116776 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 
08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.155866 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.191251 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.191302 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.191313 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.191330 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.191342 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:51Z","lastTransitionTime":"2025-12-01T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.212878 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.232028 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.247253 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 
2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.260449 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.273130 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.285628 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.295532 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.295572 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.295587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.295610 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.295625 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:51Z","lastTransitionTime":"2025-12-01T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.312831 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the 
*\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.324901 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.336881 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.350320 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.399000 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.399048 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.399061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.399080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.399092 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:51Z","lastTransitionTime":"2025-12-01T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.502792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.502835 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.502849 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.502866 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.502878 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:51Z","lastTransitionTime":"2025-12-01T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.605812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.605879 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.605900 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.605929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.605949 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:51Z","lastTransitionTime":"2025-12-01T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.710104 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.710142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.710154 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.710172 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.710188 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:51Z","lastTransitionTime":"2025-12-01T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.783262 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/0.log" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.783326 4689 generic.go:334] "Generic (PLEG): container finished" podID="6bebcb50-c292-4bca-9299-2fdc21439b18" containerID="768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986" exitCode=1 Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.783400 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dl2st" event={"ID":"6bebcb50-c292-4bca-9299-2fdc21439b18","Type":"ContainerDied","Data":"768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.783806 4689 scope.go:117] "RemoveContainer" containerID="768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.803417 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.812612 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.813009 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.813018 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.813037 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.813052 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:51Z","lastTransitionTime":"2025-12-01T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.826694 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc
92ae5f92ff69ce96b8bb82f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.839938 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.852086 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.863304 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.874881 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.889007 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.901676 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.916977 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.917025 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.917038 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.917056 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.917068 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:51Z","lastTransitionTime":"2025-12-01T08:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.922537 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.937208 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.948359 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.960609 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:51 crc kubenswrapper[4689]: I1201 08:39:51.970541 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.001293 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:51Z\\\",\\\"message\\\":\\\"2025-12-01T08:39:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3\\\\n2025-12-01T08:39:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3 to /host/opt/cni/bin/\\\\n2025-12-01T08:39:06Z [verbose] multus-daemon started\\\\n2025-12-01T08:39:06Z [verbose] Readiness Indicator file check\\\\n2025-12-01T08:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:51Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.019537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.019595 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.019610 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.019628 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.019644 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:52Z","lastTransitionTime":"2025-12-01T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.023446 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.037030 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.046589 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.046751 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:52 crc kubenswrapper[4689]: E1201 08:39:52.046839 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.046863 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:52 crc kubenswrapper[4689]: E1201 08:39:52.046983 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:52 crc kubenswrapper[4689]: E1201 08:39:52.047068 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.047609 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.123247 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.123307 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.123317 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.123335 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.123345 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:52Z","lastTransitionTime":"2025-12-01T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.311843 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.311901 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.311914 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.311933 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.311947 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:52Z","lastTransitionTime":"2025-12-01T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.415011 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.415063 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.415078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.415098 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.415112 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:52Z","lastTransitionTime":"2025-12-01T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.517659 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.517703 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.517716 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.517737 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.517753 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:52Z","lastTransitionTime":"2025-12-01T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.620077 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.620122 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.620135 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.620158 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.620174 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:52Z","lastTransitionTime":"2025-12-01T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.723077 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.723132 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.723141 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.723156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.723166 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:52Z","lastTransitionTime":"2025-12-01T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.788739 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/0.log" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.788827 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dl2st" event={"ID":"6bebcb50-c292-4bca-9299-2fdc21439b18","Type":"ContainerStarted","Data":"f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034"} Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.801441 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.813775 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.822927 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.827502 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.827531 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.827543 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.827558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.827573 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:52Z","lastTransitionTime":"2025-12-01T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.840796 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc
92ae5f92ff69ce96b8bb82f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.857950 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.868885 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.882557 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.893346 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.903869 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.916586 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.928562 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:51Z\\\",\\\"message\\\":\\\"2025-12-01T08:39:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3\\\\n2025-12-01T08:39:06+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3 to /host/opt/cni/bin/\\\\n2025-12-01T08:39:06Z [verbose] multus-daemon started\\\\n2025-12-01T08:39:06Z [verbose] Readiness Indicator file check\\\\n2025-12-01T08:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.931458 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.931496 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.931505 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:52 crc kubenswrapper[4689]: 
I1201 08:39:52.931523 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.931535 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:52Z","lastTransitionTime":"2025-12-01T08:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.943221 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.952563 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z"
Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.961585 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 
08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.974039 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:52 crc kubenswrapper[4689]: I1201 08:39:52.989586 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:52Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.007337 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:53Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.034501 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.034549 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.034560 4689 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.034578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.034592 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.046943 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:53 crc kubenswrapper[4689]: E1201 08:39:53.047167 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.137470 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.137536 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.137553 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.137574 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.137590 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.240855 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.240925 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.240938 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.240960 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.240975 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.344185 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.344232 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.344241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.344257 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.344268 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.447304 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.447347 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.447358 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.447403 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.447414 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.551113 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.551177 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.551198 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.551223 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.551241 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.654220 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.654269 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.654279 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.654300 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.654360 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.757581 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.757622 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.757631 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.757646 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.757656 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.861018 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.861067 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.861086 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.861111 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.861131 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.964874 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.964926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.964939 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.964962 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:53 crc kubenswrapper[4689]: I1201 08:39:53.964975 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:53Z","lastTransitionTime":"2025-12-01T08:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.047062 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:54 crc kubenswrapper[4689]: E1201 08:39:54.047302 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.047673 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.047757 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:54 crc kubenswrapper[4689]: E1201 08:39:54.048165 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:54 crc kubenswrapper[4689]: E1201 08:39:54.048304 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.068702 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.068746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.068756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.068772 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.068786 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:54Z","lastTransitionTime":"2025-12-01T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.171684 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.171725 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.171738 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.171753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.171765 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:54Z","lastTransitionTime":"2025-12-01T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.274323 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.274829 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.275033 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.275180 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.275339 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:54Z","lastTransitionTime":"2025-12-01T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.379549 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.379602 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.379615 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.379637 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.379653 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:54Z","lastTransitionTime":"2025-12-01T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.482314 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.482809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.483009 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.483173 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.483319 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:54Z","lastTransitionTime":"2025-12-01T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.586354 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.586478 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.586503 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.586537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.586560 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:54Z","lastTransitionTime":"2025-12-01T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.690675 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.690756 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.690783 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.690816 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.690842 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:54Z","lastTransitionTime":"2025-12-01T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.793289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.793550 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.793639 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.793746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.793826 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:54Z","lastTransitionTime":"2025-12-01T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.896993 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.897065 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.897087 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.897118 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:54 crc kubenswrapper[4689]: I1201 08:39:54.897142 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:54Z","lastTransitionTime":"2025-12-01T08:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.000115 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.001172 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.001332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.001563 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.001762 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.047106 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:55 crc kubenswrapper[4689]: E1201 08:39:55.047358 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.065745 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.106702 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.106770 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.106790 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.106817 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.106836 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.210206 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.210268 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.210283 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.210318 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.210336 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.313765 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.313839 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.313857 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.313913 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.313937 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.418066 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.418580 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.418832 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.419072 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.419290 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.523200 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.523694 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.523877 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.524069 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.524247 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.627196 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.627229 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.627237 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.627254 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.627263 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.731588 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.731928 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.732034 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.732144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.732247 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.804510 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.804606 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.804637 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.804666 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.804685 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: E1201 08:39:55.821914 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:55Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.827998 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.828044 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.828056 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.828077 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.828091 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: E1201 08:39:55.850760 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:55Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.855420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.855493 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
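
Every NodeNotReady entry above fails on the same missing file. A minimal Go sketch (not part of the journal; the directory path is taken verbatim from the NetworkPluginNotReady message) to list /etc/kubernetes/cni/net.d/ on the node and confirm whether the network provider has written a CNI config yet:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Directory named in the "no CNI configuration file" message above.
	const cniDir = "/etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir(cniDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", cniDir, err)
		return
	}
	if len(entries) == 0 {
		fmt.Printf("%s is empty: no network provider has written a config yet\n", cniDir)
		return
	}
	for _, e := range entries {
		fmt.Println(cniDir + e.Name())
	}
}

An empty or unreadable directory here matches the "no CNI configuration file" message and is why the Ready condition stays False.
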
event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.855511 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.855585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.855609 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: E1201 08:39:55.871867 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:55Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.876521 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.876554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
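
The "Error updating node status, will retry" entries all terminate in the same x509 failure against the node.network-node-identity.openshift.io webhook. A minimal Go sketch (not part of the journal; host and port taken from the failed POST to https://127.0.0.1:9743/node) that dials the endpoint and prints the serving certificate's validity window, to confirm the expiry independently of the kubelet:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// InsecureSkipVerify lets the handshake complete despite the expired
	// certificate; this probe only reads NotBefore/NotAfter, it trusts nothing.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%v notBefore=%v notAfter=%v\n", cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}

Given the log's clock of 2025-12-01T08:39:55Z, notAfter should print 2025-08-24T17:21:41Z for the leaf certificate, matching the "certificate has expired or is not yet valid" verification error.
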
event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.876564 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.876579 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.876592 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: E1201 08:39:55.889394 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:55Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.902435 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.902489 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.902507 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.902533 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.902546 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:55 crc kubenswrapper[4689]: E1201 08:39:55.918696 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:39:55Z is after 2025-08-24T17:21:41Z" Dec 01 08:39:55 crc kubenswrapper[4689]: E1201 08:39:55.918860 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.923590 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
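The failure mode here is visible at the end of each patch record: the node-identity webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-12-01, so every status patch is rejected and the kubelet gives up after its small fixed retry budget ("update node status exceeds retry count"). A minimal Go sketch for confirming what that listener is serving, run on the node itself; the address is taken from the log line, and InsecureSkipVerify is deliberate so the expired chain can be read instead of aborting the handshake:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook listener named in the log; skip verification so an
	// expired certificate can still be inspected.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject.String(),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}

On a CRC VM this pattern typically appears when the cluster was last started before the certificates' expiry; status patches will keep failing identically until the certificate is rotated.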
event="NodeHasSufficientMemory" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.923659 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.923671 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.923694 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:55 crc kubenswrapper[4689]: I1201 08:39:55.923706 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:55Z","lastTransitionTime":"2025-12-01T08:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.026349 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.026417 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.026430 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.026449 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.026460 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.047405 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.047558 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:56 crc kubenswrapper[4689]: E1201 08:39:56.047632 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.047676 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:56 crc kubenswrapper[4689]: E1201 08:39:56.047840 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:56 crc kubenswrapper[4689]: E1201 08:39:56.048014 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.130566 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.130736 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.130781 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.130838 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.130856 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.234861 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.234923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.234943 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.235166 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.235187 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.339736 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.339815 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.339827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.339910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.339925 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.443213 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.443262 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.443273 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.443297 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.443311 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.546846 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.546910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.546923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.546944 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.546964 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.650754 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.650807 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.650816 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.650833 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.650848 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.754702 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.754783 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.754796 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.754819 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.754833 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.858282 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.858347 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.858359 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.858400 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.858416 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.962255 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.962320 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.962332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.962356 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:56 crc kubenswrapper[4689]: I1201 08:39:56.962397 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:56Z","lastTransitionTime":"2025-12-01T08:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.047495 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:57 crc kubenswrapper[4689]: E1201 08:39:57.047729 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.065615 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.065670 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.065681 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.065702 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.065715 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:57Z","lastTransitionTime":"2025-12-01T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.169481 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.169539 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.169557 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.169587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.169606 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:57Z","lastTransitionTime":"2025-12-01T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.273429 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.273488 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.273499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.273520 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.273537 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:57Z","lastTransitionTime":"2025-12-01T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.405337 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.405437 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.405449 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.405468 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.405481 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:57Z","lastTransitionTime":"2025-12-01T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.509078 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.509155 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.509178 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.509211 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.509234 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:57Z","lastTransitionTime":"2025-12-01T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.612962 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.613038 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.613055 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.613083 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.613102 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:57Z","lastTransitionTime":"2025-12-01T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.716297 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.716402 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.716420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.716448 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.716469 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:57Z","lastTransitionTime":"2025-12-01T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.820153 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.820222 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.820236 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.820260 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.820274 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:57Z","lastTransitionTime":"2025-12-01T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.923624 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.923683 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.923696 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.923718 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:57 crc kubenswrapper[4689]: I1201 08:39:57.923733 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:57Z","lastTransitionTime":"2025-12-01T08:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.027146 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.027211 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.027225 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.027249 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.027262 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.047107 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.047190 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.047283 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:39:58 crc kubenswrapper[4689]: E1201 08:39:58.047422 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:39:58 crc kubenswrapper[4689]: E1201 08:39:58.047587 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:39:58 crc kubenswrapper[4689]: E1201 08:39:58.047689 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.139245 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.139306 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.139316 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.139339 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.139350 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.243226 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.243313 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.243336 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.243399 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.243428 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.346841 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.346911 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.346929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.346945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.346955 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.450076 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.450128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.450138 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.450161 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.450172 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.553753 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.553798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.553811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.553831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.553841 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.656936 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.657004 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.657022 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.657047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.657065 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.760106 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.760146 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.760158 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.760176 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.760187 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.863193 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.863257 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.863270 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.863292 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.863307 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.966402 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.966445 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.966458 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.966473 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:58 crc kubenswrapper[4689]: I1201 08:39:58.966484 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:58Z","lastTransitionTime":"2025-12-01T08:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.046687 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:39:59 crc kubenswrapper[4689]: E1201 08:39:59.046946 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.070404 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.070498 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.070519 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.070757 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.070773 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:59Z","lastTransitionTime":"2025-12-01T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.173843 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.173909 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.173926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.173950 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.173972 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:59Z","lastTransitionTime":"2025-12-01T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.277243 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.277294 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.277306 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.277328 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.277343 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:59Z","lastTransitionTime":"2025-12-01T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.380761 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.380822 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.380838 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.380863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.380881 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:59Z","lastTransitionTime":"2025-12-01T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.484843 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.484909 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.484929 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.484957 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.484974 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:59Z","lastTransitionTime":"2025-12-01T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.589491 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.589554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.589567 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.589604 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.589617 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:59Z","lastTransitionTime":"2025-12-01T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.692928 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.693023 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.693069 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.693099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.693149 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:59Z","lastTransitionTime":"2025-12-01T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.796578 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.796652 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.796668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.796699 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.796718 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:59Z","lastTransitionTime":"2025-12-01T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.899310 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.899434 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.899461 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.899495 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:39:59 crc kubenswrapper[4689]: I1201 08:39:59.899520 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:39:59Z","lastTransitionTime":"2025-12-01T08:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.002667 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.002716 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.002725 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.002740 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.002753 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.047306 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.047343 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.047481 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:00 crc kubenswrapper[4689]: E1201 08:40:00.047586 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:00 crc kubenswrapper[4689]: E1201 08:40:00.047728 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:00 crc kubenswrapper[4689]: E1201 08:40:00.047838 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.105649 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.105733 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.105761 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.105798 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.105818 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.208837 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.208899 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.208917 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.208943 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.208971 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.312170 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.312233 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.312246 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.312271 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.312289 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.417423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.417508 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.417519 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.417579 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.417594 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.520763 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.520857 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.520875 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.520907 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.520926 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.624664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.624704 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.624714 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.624730 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.624742 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.731041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.731137 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.731154 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.731180 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.731202 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.834840 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.834907 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.834919 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.834937 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.834948 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.937514 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.937571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.937581 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.937597 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:00 crc kubenswrapper[4689]: I1201 08:40:00.937607 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:00Z","lastTransitionTime":"2025-12-01T08:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.040518 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.040587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.040601 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.040635 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.040655 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:01Z","lastTransitionTime":"2025-12-01T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.047072 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:01 crc kubenswrapper[4689]: E1201 08:40:01.048022 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.048791 4689 scope.go:117] "RemoveContainer" containerID="2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.064170 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c050d26-238a-4a6a-9ced-fa621716bc33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c1b6982c1f372b680ec1f056266b51f7156a1881dea415db18d5280c7bf92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.101814 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.117854 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.132743 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.143703 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.143748 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.143759 4689 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.143791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.143817 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:01Z","lastTransitionTime":"2025-12-01T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.149341 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.164707 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.176741 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.209014 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc
92ae5f92ff69ce96b8bb82f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.221508 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.236568 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.246834 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.246898 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.246910 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.246932 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.246946 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:01Z","lastTransitionTime":"2025-12-01T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.252838 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.266193 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.282257 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.297202 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.311410 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:51Z\\\",\\\"message\\\":\\\"2025-12-01T08:39:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3\\\\n2025-12-01T08:39:06+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3 to /host/opt/cni/bin/\\\\n2025-12-01T08:39:06Z [verbose] multus-daemon started\\\\n2025-12-01T08:39:06Z [verbose] Readiness Indicator file check\\\\n2025-12-01T08:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.324299 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.334338 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.345525 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 
08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.349347 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.349409 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.349421 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.349444 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.349460 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:01Z","lastTransitionTime":"2025-12-01T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.497777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.497813 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.497835 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.497854 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.497865 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:01Z","lastTransitionTime":"2025-12-01T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.604260 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.604302 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.604717 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.604739 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.604759 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:01Z","lastTransitionTime":"2025-12-01T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.707151 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.707212 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.707222 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.707237 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.707247 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:01Z","lastTransitionTime":"2025-12-01T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.810873 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.811290 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.811462 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.811611 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.811739 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:01Z","lastTransitionTime":"2025-12-01T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.828218 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/2.log" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.833062 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.834018 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.869009 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.897024 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.917651 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 
08:40:01.917712 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.917732 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.917760 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.917786 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:01Z","lastTransitionTime":"2025-12-01T08:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.917908 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster
-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.944958 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.963920 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:01 crc kubenswrapper[4689]: I1201 08:40:01.997739 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:51Z\\\",\\\"message\\\":\\\"2025-12-01T08:39:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3\\\\n2025-12-01T08:39:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3 to /host/opt/cni/bin/\\\\n2025-12-01T08:39:06Z [verbose] multus-daemon started\\\\n2025-12-01T08:39:06Z [verbose] Readiness Indicator file check\\\\n2025-12-01T08:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:01Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.021793 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.022142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.022226 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.022313 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.022358 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.022492 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:02Z","lastTransitionTime":"2025-12-01T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.035853 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c050d26-238a-4a6a-9ced-fa621716bc33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c1b6982c1f372b680ec1f056266b51f7156a1881dea415db18d5280c7bf92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.046868 4689 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.046914 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:02 crc kubenswrapper[4689]: E1201 08:40:02.047055 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.047274 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:02 crc kubenswrapper[4689]: E1201 08:40:02.047347 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:02 crc kubenswrapper[4689]: E1201 08:40:02.047601 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.059516 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.078978 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.089636 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.110479 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the 
*\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.123186 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.125481 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.125517 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.125530 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.125552 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.125566 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:02Z","lastTransitionTime":"2025-12-01T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.140898 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.266974 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.267020 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.267029 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.267048 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.267060 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:02Z","lastTransitionTime":"2025-12-01T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.270060 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.291653 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.309341 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.325918 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.369993 4689 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.370029 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.370039 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.370054 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.370064 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:02Z","lastTransitionTime":"2025-12-01T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.473077 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.473134 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.473144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.473161 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.473172 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:02Z","lastTransitionTime":"2025-12-01T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.600405 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.600456 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.600471 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.600491 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.600503 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:02Z","lastTransitionTime":"2025-12-01T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.703501 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.704054 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.704073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.704099 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.704119 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:02Z","lastTransitionTime":"2025-12-01T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.807506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.807553 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.807941 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.807977 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.807990 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:02Z","lastTransitionTime":"2025-12-01T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.913204 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.913258 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.913266 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.913295 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:02 crc kubenswrapper[4689]: I1201 08:40:02.913305 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:02Z","lastTransitionTime":"2025-12-01T08:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.016203 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.016254 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.016267 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.016285 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.016297 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.047285 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 08:40:03 crc kubenswrapper[4689]: E1201 08:40:03.048039 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.121087 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.121131 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.121144 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.121162 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.121174 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.225075 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.225138 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.225156 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.225182 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.225201 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.328705 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.328792 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.328822 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.328856 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.328881 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.431917 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.431973 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.431991 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.432015 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.432033 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.534431 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.534511 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.534529 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.534556 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.534576 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.638127 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.638197 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.638215 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.638240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.638259 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.741851 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.741968 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.741991 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.742016 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.742044 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.845604 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.845693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.845710 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.845738 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.845753 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.848099 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/3.log"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.848983 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/2.log"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.852822 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" exitCode=1
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.852858 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"}
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.852956 4689 scope.go:117] "RemoveContainer" containerID="2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.853863 4689 scope.go:117] "RemoveContainer" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"
Dec 01 08:40:03 crc kubenswrapper[4689]: E1201 08:40:03.854178 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76"
Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.873507 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.888387 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:51Z\\\",\\\"message\\\":\\\"2025-12-01T08:39:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3\\\\n2025-12-01T08:39:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3 to /host/opt/cni/bin/\\\\n2025-12-01T08:39:06Z [verbose] multus-daemon started\\\\n2025-12-01T08:39:06Z [verbose] Readiness Indicator file check\\\\n2025-12-01T08:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.903411 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.914490 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.926178 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 
08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.941958 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.949211 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.949274 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.949298 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.949330 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.949353 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:03Z","lastTransitionTime":"2025-12-01T08:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.957171 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.968588 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.981015 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:03 crc kubenswrapper[4689]: I1201 08:40:03.991509 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c050d26-238a-4a6a-9ced-fa621716bc33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c1b6982c1f372b680ec1f056266b51f7156a1881dea415db18d5280c7bf92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:03Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.004268 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.015846 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.025514 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T08:40:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.046429 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.046576 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.046778 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.046830 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.047085 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.047419 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.048434 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the 
*\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:40:02Z\\\",\\\"message\\\":\\\", err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:40:02.715699 6556 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"78f6184b-c7cf-436d-8cbb-4b31f8af75e8\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.052302 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.052332 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.052341 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.052358 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.052387 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.058390 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.072299 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.084427 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.094905 4689 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:04Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.155047 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.155473 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.155582 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.155690 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.155951 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.258868 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.258902 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.258911 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.258926 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.258935 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.362997 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.363059 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.363074 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.363097 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.363118 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.466163 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.466229 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.466240 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.466255 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.466266 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.572585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.572656 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.572691 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.572730 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.572784 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.676226 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.676306 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.676326 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.676355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.676421 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.736866 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.737039 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.737074 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.737188 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:08.73714219 +0000 UTC m=+148.809430094 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.737222 4689 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.737308 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:08.737299754 +0000 UTC m=+148.809587658 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.737486 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.737547 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.737574 4689 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.737519 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.737663 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:08.737640113 +0000 UTC m=+148.809928057 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.737845 4689 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.737999 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:08.737973412 +0000 UTC m=+148.810261406 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.779509 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.779571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.779588 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.779618 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.779638 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.838439 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.838777 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.838828 4689 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.838855 4689 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:40:04 crc kubenswrapper[4689]: E1201 08:40:04.838974 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:08.838941104 +0000 UTC m=+148.911229048 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.863505 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/3.log" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.882241 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.882312 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.882330 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.882358 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.882434 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.986166 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.986223 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.986233 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.986296 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:04 crc kubenswrapper[4689]: I1201 08:40:04.986308 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:04Z","lastTransitionTime":"2025-12-01T08:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.046821 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:05 crc kubenswrapper[4689]: E1201 08:40:05.047001 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.089702 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.089971 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.090042 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.090132 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.090210 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:05Z","lastTransitionTime":"2025-12-01T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.193972 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.194044 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.194063 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.194095 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.194115 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:05Z","lastTransitionTime":"2025-12-01T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.297337 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.297574 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.297603 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.297688 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.297759 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:05Z","lastTransitionTime":"2025-12-01T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.401295 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.401735 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.401939 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.402142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.402441 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:05Z","lastTransitionTime":"2025-12-01T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.506708 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.506787 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.506806 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.506833 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.506856 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:05Z","lastTransitionTime":"2025-12-01T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.610809 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.610848 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.610858 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.610874 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.610884 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:05Z","lastTransitionTime":"2025-12-01T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.714442 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.714931 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.715221 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.715720 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.715868 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:05Z","lastTransitionTime":"2025-12-01T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.819831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.819900 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.819914 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.819985 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.820001 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:05Z","lastTransitionTime":"2025-12-01T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.947290 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.947352 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.947399 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.947425 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:05 crc kubenswrapper[4689]: I1201 08:40:05.947444 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:05Z","lastTransitionTime":"2025-12-01T08:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.046405 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.046854 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.046669 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:06 crc kubenswrapper[4689]: E1201 08:40:06.047041 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:06 crc kubenswrapper[4689]: E1201 08:40:06.047087 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:06 crc kubenswrapper[4689]: E1201 08:40:06.047255 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.050443 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.050505 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.050528 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.050555 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.050579 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.154235 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.154305 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.154325 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.154358 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.154412 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.240506 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.240591 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.240610 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.240639 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.240658 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: E1201 08:40:06.257475 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.262497 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.262535 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.262548 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.262567 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.262581 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: E1201 08:40:06.278517 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.283554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.283606 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.283624 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.283650 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.283669 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: E1201 08:40:06.299964 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.304690 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.304775 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.304812 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.304834 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.304844 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: E1201 08:40:06.318550 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.323584 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.323645 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.323665 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.323691 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.323710 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: E1201 08:40:06.340165 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T08:40:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"87f9359d-17aa-499d-90bc-b05146c26f0f\\\",\\\"systemUUID\\\":\\\"1b1c64ae-9dbc-417f-9fac-7f3e657b08f7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:06Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:06 crc kubenswrapper[4689]: E1201 08:40:06.340498 4689 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.342759 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.342811 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.342823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.342846 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.342858 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.446062 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.446125 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.446153 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.446177 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.446197 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.549768 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.550439 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.550598 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.551286 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.552324 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.657781 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.657865 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.657885 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.657914 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.657935 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.761162 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.761218 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.761227 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.761247 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.761258 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.864566 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.864633 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.864698 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.864726 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.864747 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.967550 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.967622 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.967650 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.967684 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:06 crc kubenswrapper[4689]: I1201 08:40:06.967707 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:06Z","lastTransitionTime":"2025-12-01T08:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.047501 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:07 crc kubenswrapper[4689]: E1201 08:40:07.047778 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.074568 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.074747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.074841 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.074896 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.074925 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:07Z","lastTransitionTime":"2025-12-01T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.178827 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.178890 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.178907 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.178927 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.178942 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:07Z","lastTransitionTime":"2025-12-01T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.282778 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.282829 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.282843 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.282863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.282881 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:07Z","lastTransitionTime":"2025-12-01T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.385589 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.385664 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.385673 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.385689 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.385700 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:07Z","lastTransitionTime":"2025-12-01T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.488710 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.488863 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.488887 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.488912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.488931 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:07Z","lastTransitionTime":"2025-12-01T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.592160 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.592233 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.592251 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.592278 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.592297 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:07Z","lastTransitionTime":"2025-12-01T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.695955 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.696028 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.696046 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.696073 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.696093 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:07Z","lastTransitionTime":"2025-12-01T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.798993 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.799049 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.799064 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.799087 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.799106 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:07Z","lastTransitionTime":"2025-12-01T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.901941 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.901990 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.902008 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.902032 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:07 crc kubenswrapper[4689]: I1201 08:40:07.902051 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:07Z","lastTransitionTime":"2025-12-01T08:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.005291 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.005340 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.005356 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.005420 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.005439 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.046940 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.047541 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.047665 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:08 crc kubenswrapper[4689]: E1201 08:40:08.047801 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:08 crc kubenswrapper[4689]: E1201 08:40:08.048008 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:08 crc kubenswrapper[4689]: E1201 08:40:08.048065 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.108956 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.109476 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.109563 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.109640 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.110021 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.215540 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.216007 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.216092 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.216198 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.216305 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.319908 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.319970 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.319983 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.320005 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.320018 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.423897 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.423979 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.423993 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.424019 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.424037 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.527668 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.527716 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.527727 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.527746 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.527755 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.631110 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.631642 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.631660 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.631688 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.631707 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.734248 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.734307 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.734325 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.734351 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.734397 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.838289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.838402 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.838432 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.838466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.838488 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.941854 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.941919 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.941939 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.941963 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:08 crc kubenswrapper[4689]: I1201 08:40:08.941986 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:08Z","lastTransitionTime":"2025-12-01T08:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.045481 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.045565 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.045599 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.045634 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.045658 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.046573 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:09 crc kubenswrapper[4689]: E1201 08:40:09.046794 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.150617 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.150748 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.150819 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.150853 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.150918 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.254994 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.255063 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.255080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.255103 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.255117 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.359080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.359141 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.359160 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.359209 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.359228 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.463098 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.463168 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.463179 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.463202 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.463215 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.567001 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.567086 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.567109 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.567135 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.567160 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.670718 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.670774 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.670793 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.670824 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.670848 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.781763 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.781823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.782064 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.782101 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.782122 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.887005 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.887053 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.887064 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.887080 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.887113 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.991823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.991904 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.991923 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.992289 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:09 crc kubenswrapper[4689]: I1201 08:40:09.992590 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:09Z","lastTransitionTime":"2025-12-01T08:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.046815 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.046892 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:10 crc kubenswrapper[4689]: E1201 08:40:10.047057 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.046846 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:10 crc kubenswrapper[4689]: E1201 08:40:10.047231 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:10 crc kubenswrapper[4689]: E1201 08:40:10.047314 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.096283 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.096321 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.096336 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.096352 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.096400 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:10Z","lastTransitionTime":"2025-12-01T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.199142 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.199207 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.199224 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.199244 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.199267 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:10Z","lastTransitionTime":"2025-12-01T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.302186 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.302230 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.302248 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.302272 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.302294 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:10Z","lastTransitionTime":"2025-12-01T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.406325 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.406466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.406487 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.406516 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.406537 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:10Z","lastTransitionTime":"2025-12-01T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.509602 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.510179 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.510254 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.510355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.510483 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:10Z","lastTransitionTime":"2025-12-01T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.613398 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.613442 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.613454 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.613472 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.613484 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:10Z","lastTransitionTime":"2025-12-01T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.716164 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.716553 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.716673 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.716780 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.716884 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:10Z","lastTransitionTime":"2025-12-01T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.820516 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.820945 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.821035 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.821120 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.821309 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:10Z","lastTransitionTime":"2025-12-01T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.924267 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.924345 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.924356 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.924400 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:10 crc kubenswrapper[4689]: I1201 08:40:10.924415 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:10Z","lastTransitionTime":"2025-12-01T08:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.028459 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.028547 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.028571 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.028604 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.028629 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:11Z","lastTransitionTime":"2025-12-01T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.047029 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:11 crc kubenswrapper[4689]: E1201 08:40:11.047726 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.070186 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a78a7f1a0302034697497f8a7a3d9547400a4311d7e304fdba95a19750c66658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.087881 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ed68789c1f37ddbc952b0927feba7a704b01f3ee52e057e29c989f1d475e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.102006 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c050d26-238a-4a6a-9ced-fa621716bc33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c1b6982c1f372b680ec1f056266b51f7156a1881dea415db18d5280c7bf92b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a56b83ce083a611ba49ed00ff01a20ff09d5379160397a8f3f650269825849af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.124237 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e81b25bc4fc917a5480b84f4c7d2550829f5ad209d4cac14d2cd64d5b792b9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.131467 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.131558 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.131604 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.131631 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.131676 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:11Z","lastTransitionTime":"2025-12-01T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.147778 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.166969 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4z9l8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74395c07-d5ab-45ec-a616-1d0b1b336583\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2f9b1493d035a89354477e779e8277075127a85756f190d5f0c3bf98a15f15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjzb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4z9l8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.191859 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988f960f-52fa-406f-9320-a8eec7a04f76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c1785e9dd655b78cb3c4139dca5280fb0d84adc92ae5f92ff69ce96b8bb82f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:33Z\\\",\\\"message\\\":\\\" current time 2025-12-01T08:39:33Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:39:33.035110 6200 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1201 08:39:33.033847 6200 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 08:39:33.035122 6200 obj_retry.go:409] Going to retry *v1.Pod resource setup for 6 objects: [openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-kube-scheduler/openshift-kube-scheduler-crc openshift-multus/multus-dl2st openshift-multus/network-metrics-daemon-jtwvs openshift-machine-config-operator/machine-config-daemon-hmdnx openshift-image-registry/node-ca-kg5bw]\\\\nI1201 08:39:33.035138 6200 obj_retry.go:418] Waiting for all the *\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:40:02Z\\\",\\\"message\\\":\\\", err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:02Z is after 2025-08-24T17:21:41Z]\\\\nI1201 08:40:02.715699 6556 services_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/catalog-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"78f6184b-c7cf-436d-8cbb-4b31f8af75e8\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:40:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-o
penvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcm2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8zn56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.205672 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d6a08d0-a948-4c69-b3f0-f5e084adb453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jtwvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.225040 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f71ff62f-7141-4d50-b8ad-8312dc8a80e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ccd36e732198fa380968b1ca9a174e8f380c9d3c3c762c3267b2f65367c4360\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f804833ecd52f1cd93a4df02639ff912e5c7ff38bfce3ba1c47e8cd5fca46e11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35299e45eecb4b2b98003295a39b7945d1fa2943a5a7f97301054d255687877e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.234301 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.234348 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.234360 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.234411 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.234424 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:11Z","lastTransitionTime":"2025-12-01T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.239748 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3947625d-75bf-4332-a233-1491b2ee9d96\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5095b8eb2f2f6319a0221ed4d7ec1fb81540fb75f610f7b57dc23fefbd196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v5pz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hmdnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.254661 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82556f94-5534-4aae-9690-5ce8e8d38113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad3697dfb9a953c1345d69e9f6c393a184bafaf121e9a62a86d05f8f26e3f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20de12d3c9020c7818be4f11a75808c7b7e81db8ae821b12284182d81e7cbceb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35c39a06af6610f8d18eac5f96e5dee0b542fb3fb0c81a102ed8b2a20d054a42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d507cfb5d0bd608d6d5ecd4105f944cdf013df7acf17c4d6237512601b4a7125\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.273348 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1946844b-83ee-401e-b3b6-5994ef81c85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1201 08:38:54.283267 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1201 08:38:54.288329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1100834793/tls.crt::/tmp/serving-cert-1100834793/tls.key\\\\\\\"\\\\nI1201 08:39:00.137331 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1201 08:39:00.140699 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1201 08:39:00.140719 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1201 08:39:00.140740 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1201 08:39:00.140746 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 08:39:00.151271 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 08:39:00.151299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151305 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 08:39:00.151312 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 08:39:00.151316 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 08:39:00.151322 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 08:39:00.151326 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 08:39:00.151604 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1201 08:39:00.154622 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:38:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:38:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:38:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:38:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.289292 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dl2st" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bebcb50-c292-4bca-9299-2fdc21439b18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T08:39:51Z\\\",\\\"message\\\":\\\"2025-12-01T08:39:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3\\\\n2025-12-01T08:39:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d704f42a-5127-4c68-bde9-b46f9b6549c3 to /host/opt/cni/bin/\\\\n2025-12-01T08:39:06Z [verbose] multus-daemon started\\\\n2025-12-01T08:39:06Z [verbose] Readiness Indicator file check\\\\n2025-12-01T08:39:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98wj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dl2st\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.304703 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcab587f-eb9b-4dde-a0a1-75ed175999b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://761460333be2b369513cc7812afa57b580daf1e0e9add1c20f33ddf45601632c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6954a34226801e4552090e2c6e76c36de94e28ea8e175bc26392f534fb12214\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33d22ffe5ececc3e9210cb8c45471bd5f4b8a76e9ff2002cfc4c099b6973a7ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30e1f218df66bf86cef3d56c80b2d5bafacfabbac8868f3127a70f88431d00e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://548c59ec353473b69fd54aa96a077763c420647673b64000364532db2c3beefe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae04920175b7c9665a739c2f390f5b21a7f8227277909132d9719eaba43c9dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://512fd55cf3633a9fcdf425af76d251823e7638bd08a7f32341108470103ea67d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T08:39:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T08:39:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-flskq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7p2p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.315430 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg5bw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af90efaa-97be-48b4-bfe6-dc25956d2b5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15e208d5bee2ddf73e62749c6865d4a6cc138365c59ab582e2b28e8c01480591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t2sbc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg5bw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.328049 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65456ad6-e7d1-4546-a977-244691fc5722\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50390d9cf966913cfb379da199be0fe90b9085e0d76114903eb624054a7f84b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cda8fd2f87a8bee5f54685633fc64ce2dd06bfe6e5ea9fa8458345954080e37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T08:39:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwwx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T08:39:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-c2qqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 
08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.338565 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.338615 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.338625 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.338643 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.338654 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:11Z","lastTransitionTime":"2025-12-01T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.341960 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.357297 4689 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T08:39:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T08:40:11Z is after 2025-08-24T17:21:41Z" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.441780 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.441850 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.441861 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.441877 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.441889 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:11Z","lastTransitionTime":"2025-12-01T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.544646 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.544715 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.544733 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.544762 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.544781 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:11Z","lastTransitionTime":"2025-12-01T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.647648 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.647687 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.647698 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.647715 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.647726 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:11Z","lastTransitionTime":"2025-12-01T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.750138 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.750540 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.750680 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.750777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.750893 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:11Z","lastTransitionTime":"2025-12-01T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.906794 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.907044 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.907061 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.907083 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:11 crc kubenswrapper[4689]: I1201 08:40:11.907096 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:11Z","lastTransitionTime":"2025-12-01T08:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.010936 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.010978 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.010990 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.011010 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.011021 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.047405 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.047534 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.047580 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:12 crc kubenswrapper[4689]: E1201 08:40:12.047609 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:12 crc kubenswrapper[4689]: E1201 08:40:12.047821 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:12 crc kubenswrapper[4689]: E1201 08:40:12.047956 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.114041 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.114108 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.114122 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.114145 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.114160 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.218661 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.218725 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.218747 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.218771 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.218783 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.321900 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.321946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.321958 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.321974 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.321983 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.424817 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.424859 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.424872 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.424888 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.424896 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.528777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.529382 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.529507 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.529587 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.529662 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.632852 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.632935 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.632948 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.632971 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.632986 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.735511 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.735554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.735564 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.735582 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.735594 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.838104 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.838139 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.838150 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.838167 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.838175 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.941022 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.941070 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.941081 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.941097 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:12 crc kubenswrapper[4689]: I1201 08:40:12.941107 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:12Z","lastTransitionTime":"2025-12-01T08:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.043869 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.043912 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.043921 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.043941 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.043951 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.047280 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:13 crc kubenswrapper[4689]: E1201 08:40:13.047820 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.149874 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.149934 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.149948 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.149969 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.149980 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.253615 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.253661 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.253673 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.253693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.253705 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.356565 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.356643 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.356654 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.356674 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.356689 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.459866 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.459919 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.459931 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.459951 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.459962 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.571347 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.571430 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.571444 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.571466 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.571480 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.674305 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.674385 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.674398 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.674423 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.674434 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.777537 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.777589 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.777599 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.777616 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.777625 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.880601 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.880970 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.881051 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.881118 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.881182 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
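Has your network provider started?"}

The same five-event block (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, then the "Node became not ready" condition) repeats at roughly 100 ms intervals for as long as NetworkReady stays false, which is why this capture is dominated by it. A small sketch for pulling those condition entries out of a dump like this one; it assumes one journal entry per line, as journalctl normally prints, and the file name is a placeholder:

```python
# Illustrative sketch only: extract the repeating "Node became not ready"
# entries from a kubelet journal dump and summarize them. The regex is keyed
# to the klog format visible above and expects one entry per line.
import json
import re
import sys

ENTRY = re.compile(
    r'I\d{4} (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}).*?'
    r'"Node became not ready".*?condition=(?P<cond>\{.*?\})'
)

def not_ready_heartbeats(text: str):
    for m in ENTRY.finditer(text):
        cond = json.loads(m.group("cond"))
        yield m.group("time"), cond.get("reason"), cond.get("lastTransitionTime")

if __name__ == "__main__":
    data = open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log").read()
    rows = list(not_ready_heartbeats(data))
    for t, reason, transition in rows:
        print(f"{t} reason={reason} lastTransitionTime={transition}")
    print(f"{len(rows)} NotReady heartbeats in this capture")
```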
Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.984066 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.984128 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.984139 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.984171 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:13 crc kubenswrapper[4689]: I1201 08:40:13.984185 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:13Z","lastTransitionTime":"2025-12-01T08:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.047211 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.047279 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.047292 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:14 crc kubenswrapper[4689]: E1201 08:40:14.047462 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:14 crc kubenswrapper[4689]: E1201 08:40:14.047528 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:14 crc kubenswrapper[4689]: E1201 08:40:14.047687 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.087554 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.087611 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.087630 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.087657 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.087678 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:14Z","lastTransitionTime":"2025-12-01T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.191075 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.191131 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.191148 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.191174 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.191190 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:14Z","lastTransitionTime":"2025-12-01T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.293441 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.293489 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.293500 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.293519 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.293529 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:14Z","lastTransitionTime":"2025-12-01T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.396791 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.396837 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.396848 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.396870 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.396880 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:14Z","lastTransitionTime":"2025-12-01T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.500499 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.500577 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.500588 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.500608 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.500619 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:14Z","lastTransitionTime":"2025-12-01T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.604767 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.604823 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.604835 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.604855 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.604867 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:14Z","lastTransitionTime":"2025-12-01T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.707742 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.707795 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.707806 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.707821 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.707831 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:14Z","lastTransitionTime":"2025-12-01T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.811321 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.811392 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.811408 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.811427 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.811442 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:14Z","lastTransitionTime":"2025-12-01T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.914231 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.914490 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.914534 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.914575 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:14 crc kubenswrapper[4689]: I1201 08:40:14.914603 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:14Z","lastTransitionTime":"2025-12-01T08:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.017974 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.018046 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.018063 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.018083 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.018098 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.047261 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:15 crc kubenswrapper[4689]: E1201 08:40:15.047521 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.120755 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.120831 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.120847 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.120873 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.120888 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.224055 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.224116 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.224132 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.224154 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.224171 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.326857 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.326913 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.326928 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.326946 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.326961 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.431483 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.431560 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.431585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.431613 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.431636 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.535495 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.535551 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.535567 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.535589 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.535607 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.638724 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.638806 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.638826 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.638866 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.638887 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.742714 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.742757 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.742777 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.742803 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.742821 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.845596 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.845674 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.845693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.845723 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.845748 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.949214 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.949251 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.949262 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.949291 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:15 crc kubenswrapper[4689]: I1201 08:40:15.949308 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:15Z","lastTransitionTime":"2025-12-01T08:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.046436 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.046515 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.046589 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:16 crc kubenswrapper[4689]: E1201 08:40:16.046674 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:16 crc kubenswrapper[4689]: E1201 08:40:16.046833 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:16 crc kubenswrapper[4689]: E1201 08:40:16.047048 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.053278 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.053333 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.053355 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.053450 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.053476 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:16Z","lastTransitionTime":"2025-12-01T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.157572 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.157635 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.157649 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.157703 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.157723 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:16Z","lastTransitionTime":"2025-12-01T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.260990 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.261033 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.261044 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.261063 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.261077 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:16Z","lastTransitionTime":"2025-12-01T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.364861 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.365104 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.365133 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.365211 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.365235 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:16Z","lastTransitionTime":"2025-12-01T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.469284 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.469359 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.469436 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.469477 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.469505 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:16Z","lastTransitionTime":"2025-12-01T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.573585 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.573651 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.573669 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.573693 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.573711 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:16Z","lastTransitionTime":"2025-12-01T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.651480 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.651561 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.651579 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.651607 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.651630 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:16Z","lastTransitionTime":"2025-12-01T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.694955 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.695004 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.695020 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.695043 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.695059 4689 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T08:40:16Z","lastTransitionTime":"2025-12-01T08:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.721874 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t"] Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.722988 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.727480 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.728738 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.731010 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.732633 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.753664 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.753503789 podStartE2EDuration="1m16.753503789s" podCreationTimestamp="2025-12-01 08:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:16.753111108 +0000 UTC m=+96.825399012" watchObservedRunningTime="2025-12-01 08:40:16.753503789 +0000 UTC m=+96.825791733" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.810352 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/017da098-350a-458e-a64b-396c5ab6a0e7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.810546 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/017da098-350a-458e-a64b-396c5ab6a0e7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.810639 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/017da098-350a-458e-a64b-396c5ab6a0e7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.810747 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/017da098-350a-458e-a64b-396c5ab6a0e7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.810805 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017da098-350a-458e-a64b-396c5ab6a0e7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.816829 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4z9l8" podStartSLOduration=75.816800173 podStartE2EDuration="1m15.816800173s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:16.795088632 +0000 UTC m=+96.867376576" watchObservedRunningTime="2025-12-01 08:40:16.816800173 +0000 UTC m=+96.889088077" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.844575 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.844536735 podStartE2EDuration="49.844536735s" podCreationTimestamp="2025-12-01 08:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:16.842939762 +0000 UTC m=+96.915227696" watchObservedRunningTime="2025-12-01 08:40:16.844536735 +0000 UTC m=+96.916824659" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.876002 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=75.875972647 podStartE2EDuration="1m15.875972647s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:16.863214375 +0000 UTC m=+96.935502309" watchObservedRunningTime="2025-12-01 08:40:16.875972647 +0000 UTC m=+96.948260571" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.894711 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podStartSLOduration=75.894684627 podStartE2EDuration="1m15.894684627s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:16.8768412 +0000 UTC m=+96.949129154" watchObservedRunningTime="2025-12-01 08:40:16.894684627 +0000 UTC m=+96.966972541" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.911884 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/017da098-350a-458e-a64b-396c5ab6a0e7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.911950 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/017da098-350a-458e-a64b-396c5ab6a0e7-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.911986 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017da098-350a-458e-a64b-396c5ab6a0e7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.912003 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/017da098-350a-458e-a64b-396c5ab6a0e7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.912035 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/017da098-350a-458e-a64b-396c5ab6a0e7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.912198 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/017da098-350a-458e-a64b-396c5ab6a0e7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.912245 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/017da098-350a-458e-a64b-396c5ab6a0e7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.913873 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/017da098-350a-458e-a64b-396c5ab6a0e7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.919126 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/017da098-350a-458e-a64b-396c5ab6a0e7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.935675 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017da098-350a-458e-a64b-396c5ab6a0e7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-5xv4t\" (UID: \"017da098-350a-458e-a64b-396c5ab6a0e7\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.959311 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7p2p7" podStartSLOduration=75.959279185 podStartE2EDuration="1m15.959279185s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:16.957707454 +0000 UTC m=+97.029995378" watchObservedRunningTime="2025-12-01 08:40:16.959279185 +0000 UTC m=+97.031567089" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.959697 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dl2st" podStartSLOduration=75.959691987 podStartE2EDuration="1m15.959691987s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:16.934492253 +0000 UTC m=+97.006780207" watchObservedRunningTime="2025-12-01 08:40:16.959691987 +0000 UTC m=+97.031979891" Dec 01 08:40:16 crc kubenswrapper[4689]: I1201 08:40:16.988329 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kg5bw" podStartSLOduration=75.988306423 podStartE2EDuration="1m15.988306423s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:16.973770594 +0000 UTC m=+97.046058498" watchObservedRunningTime="2025-12-01 08:40:16.988306423 +0000 UTC m=+97.060594327" Dec 01 08:40:17 crc kubenswrapper[4689]: I1201 08:40:17.003210 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-c2qqj" podStartSLOduration=75.003185561 podStartE2EDuration="1m15.003185561s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:16.989432693 +0000 UTC m=+97.061720617" watchObservedRunningTime="2025-12-01 08:40:17.003185561 +0000 UTC m=+97.075473465" Dec 01 08:40:17 crc kubenswrapper[4689]: I1201 08:40:17.017359 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.017335119 podStartE2EDuration="22.017335119s" podCreationTimestamp="2025-12-01 08:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:17.00390312 +0000 UTC m=+97.076191044" watchObservedRunningTime="2025-12-01 08:40:17.017335119 +0000 UTC m=+97.089623023" Dec 01 08:40:17 crc kubenswrapper[4689]: I1201 08:40:17.046840 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:17 crc kubenswrapper[4689]: E1201 08:40:17.047140 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:17 crc kubenswrapper[4689]: I1201 08:40:17.049028 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" Dec 01 08:40:17 crc kubenswrapper[4689]: I1201 08:40:17.932532 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" event={"ID":"017da098-350a-458e-a64b-396c5ab6a0e7","Type":"ContainerStarted","Data":"c3fb7a9e6107d12aa61255a37ca7cf2f4d70e37478ae9a69fe7fff8aeef4db07"} Dec 01 08:40:17 crc kubenswrapper[4689]: I1201 08:40:17.933080 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" event={"ID":"017da098-350a-458e-a64b-396c5ab6a0e7","Type":"ContainerStarted","Data":"616506e05c1233b4ee78a13635a4e1c436b110b35568056b2d89739780f28760"} Dec 01 08:40:17 crc kubenswrapper[4689]: I1201 08:40:17.957691 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-5xv4t" podStartSLOduration=76.957664125 podStartE2EDuration="1m16.957664125s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:17.956275388 +0000 UTC m=+98.028563362" watchObservedRunningTime="2025-12-01 08:40:17.957664125 +0000 UTC m=+98.029952039" Dec 01 08:40:18 crc kubenswrapper[4689]: I1201 08:40:18.047276 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:18 crc kubenswrapper[4689]: I1201 08:40:18.047279 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:18 crc kubenswrapper[4689]: E1201 08:40:18.047457 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:18 crc kubenswrapper[4689]: I1201 08:40:18.047305 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:18 crc kubenswrapper[4689]: E1201 08:40:18.047651 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:18 crc kubenswrapper[4689]: E1201 08:40:18.048013 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:19 crc kubenswrapper[4689]: I1201 08:40:19.047011 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:19 crc kubenswrapper[4689]: E1201 08:40:19.047216 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:19 crc kubenswrapper[4689]: I1201 08:40:19.048864 4689 scope.go:117] "RemoveContainer" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" Dec 01 08:40:19 crc kubenswrapper[4689]: E1201 08:40:19.049289 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" Dec 01 08:40:20 crc kubenswrapper[4689]: I1201 08:40:20.046886 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:20 crc kubenswrapper[4689]: I1201 08:40:20.046886 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:20 crc kubenswrapper[4689]: I1201 08:40:20.047026 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:20 crc kubenswrapper[4689]: E1201 08:40:20.047116 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:20 crc kubenswrapper[4689]: E1201 08:40:20.047285 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:20 crc kubenswrapper[4689]: E1201 08:40:20.047392 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:21 crc kubenswrapper[4689]: I1201 08:40:21.046705 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:21 crc kubenswrapper[4689]: E1201 08:40:21.047658 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:21 crc kubenswrapper[4689]: I1201 08:40:21.774174 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:21 crc kubenswrapper[4689]: E1201 08:40:21.774581 4689 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:40:21 crc kubenswrapper[4689]: E1201 08:40:21.774703 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs podName:5d6a08d0-a948-4c69-b3f0-f5e084adb453 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.774667958 +0000 UTC m=+165.846955902 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs") pod "network-metrics-daemon-jtwvs" (UID: "5d6a08d0-a948-4c69-b3f0-f5e084adb453") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 08:40:22 crc kubenswrapper[4689]: I1201 08:40:22.047448 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:22 crc kubenswrapper[4689]: E1201 08:40:22.047650 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:22 crc kubenswrapper[4689]: I1201 08:40:22.047792 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:22 crc kubenswrapper[4689]: I1201 08:40:22.047891 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:22 crc kubenswrapper[4689]: E1201 08:40:22.047949 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:22 crc kubenswrapper[4689]: E1201 08:40:22.048220 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:23 crc kubenswrapper[4689]: I1201 08:40:23.047280 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:23 crc kubenswrapper[4689]: E1201 08:40:23.047932 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:24 crc kubenswrapper[4689]: I1201 08:40:24.047027 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:24 crc kubenswrapper[4689]: I1201 08:40:24.047050 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:24 crc kubenswrapper[4689]: E1201 08:40:24.047320 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:24 crc kubenswrapper[4689]: E1201 08:40:24.047458 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:24 crc kubenswrapper[4689]: I1201 08:40:24.047060 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:24 crc kubenswrapper[4689]: E1201 08:40:24.047600 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:25 crc kubenswrapper[4689]: I1201 08:40:25.047494 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:25 crc kubenswrapper[4689]: E1201 08:40:25.047697 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:26 crc kubenswrapper[4689]: I1201 08:40:26.047084 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:26 crc kubenswrapper[4689]: I1201 08:40:26.047180 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:26 crc kubenswrapper[4689]: I1201 08:40:26.047084 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:26 crc kubenswrapper[4689]: E1201 08:40:26.047349 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:26 crc kubenswrapper[4689]: E1201 08:40:26.047594 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:26 crc kubenswrapper[4689]: E1201 08:40:26.047780 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:27 crc kubenswrapper[4689]: I1201 08:40:27.046541 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:27 crc kubenswrapper[4689]: E1201 08:40:27.047615 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:27 crc kubenswrapper[4689]: I1201 08:40:27.068116 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 01 08:40:28 crc kubenswrapper[4689]: I1201 08:40:28.046536 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:28 crc kubenswrapper[4689]: E1201 08:40:28.046730 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:28 crc kubenswrapper[4689]: I1201 08:40:28.047045 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:28 crc kubenswrapper[4689]: E1201 08:40:28.047136 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:28 crc kubenswrapper[4689]: I1201 08:40:28.047339 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:28 crc kubenswrapper[4689]: E1201 08:40:28.047525 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:29 crc kubenswrapper[4689]: I1201 08:40:29.046659 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:29 crc kubenswrapper[4689]: E1201 08:40:29.046826 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:30 crc kubenswrapper[4689]: I1201 08:40:30.046638 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:30 crc kubenswrapper[4689]: I1201 08:40:30.046711 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:30 crc kubenswrapper[4689]: I1201 08:40:30.046668 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:30 crc kubenswrapper[4689]: E1201 08:40:30.046882 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:30 crc kubenswrapper[4689]: E1201 08:40:30.047023 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:30 crc kubenswrapper[4689]: E1201 08:40:30.047135 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:31 crc kubenswrapper[4689]: I1201 08:40:31.046599 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:31 crc kubenswrapper[4689]: E1201 08:40:31.047893 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:32 crc kubenswrapper[4689]: I1201 08:40:32.046510 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:32 crc kubenswrapper[4689]: I1201 08:40:32.046615 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:32 crc kubenswrapper[4689]: I1201 08:40:32.046695 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:32 crc kubenswrapper[4689]: E1201 08:40:32.046775 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:32 crc kubenswrapper[4689]: E1201 08:40:32.047139 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:32 crc kubenswrapper[4689]: E1201 08:40:32.047570 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:32 crc kubenswrapper[4689]: I1201 08:40:32.047714 4689 scope.go:117] "RemoveContainer" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" Dec 01 08:40:32 crc kubenswrapper[4689]: E1201 08:40:32.047912 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8zn56_openshift-ovn-kubernetes(988f960f-52fa-406f-9320-a8eec7a04f76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" Dec 01 08:40:33 crc kubenswrapper[4689]: I1201 08:40:33.047300 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:33 crc kubenswrapper[4689]: E1201 08:40:33.047680 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:34 crc kubenswrapper[4689]: I1201 08:40:34.046388 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:34 crc kubenswrapper[4689]: I1201 08:40:34.046457 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:34 crc kubenswrapper[4689]: E1201 08:40:34.046538 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:34 crc kubenswrapper[4689]: E1201 08:40:34.046666 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:34 crc kubenswrapper[4689]: I1201 08:40:34.046802 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:34 crc kubenswrapper[4689]: E1201 08:40:34.046890 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:35 crc kubenswrapper[4689]: I1201 08:40:35.047025 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:35 crc kubenswrapper[4689]: E1201 08:40:35.047565 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:36 crc kubenswrapper[4689]: I1201 08:40:36.046943 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:36 crc kubenswrapper[4689]: I1201 08:40:36.046934 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:36 crc kubenswrapper[4689]: I1201 08:40:36.047141 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:36 crc kubenswrapper[4689]: E1201 08:40:36.047941 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:36 crc kubenswrapper[4689]: E1201 08:40:36.047642 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:36 crc kubenswrapper[4689]: E1201 08:40:36.048179 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:37 crc kubenswrapper[4689]: I1201 08:40:37.047201 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:37 crc kubenswrapper[4689]: E1201 08:40:37.047413 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:38 crc kubenswrapper[4689]: I1201 08:40:38.047108 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:38 crc kubenswrapper[4689]: I1201 08:40:38.047134 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:38 crc kubenswrapper[4689]: I1201 08:40:38.047302 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:38 crc kubenswrapper[4689]: E1201 08:40:38.048042 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:38 crc kubenswrapper[4689]: E1201 08:40:38.048426 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:38 crc kubenswrapper[4689]: E1201 08:40:38.048658 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:39 crc kubenswrapper[4689]: I1201 08:40:39.016210 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/1.log" Dec 01 08:40:39 crc kubenswrapper[4689]: I1201 08:40:39.016883 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/0.log" Dec 01 08:40:39 crc kubenswrapper[4689]: I1201 08:40:39.016966 4689 generic.go:334] "Generic (PLEG): container finished" podID="6bebcb50-c292-4bca-9299-2fdc21439b18" containerID="f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034" exitCode=1 Dec 01 08:40:39 crc kubenswrapper[4689]: I1201 08:40:39.017022 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dl2st" event={"ID":"6bebcb50-c292-4bca-9299-2fdc21439b18","Type":"ContainerDied","Data":"f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034"} Dec 01 08:40:39 crc kubenswrapper[4689]: I1201 08:40:39.017085 4689 scope.go:117] "RemoveContainer" containerID="768aa329fc277550bd549890a176d80246fcbfa50fbc3b88c444434b980b9986" Dec 01 08:40:39 crc kubenswrapper[4689]: I1201 08:40:39.017730 4689 scope.go:117] "RemoveContainer" containerID="f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034" Dec 01 08:40:39 crc kubenswrapper[4689]: E1201 08:40:39.017933 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dl2st_openshift-multus(6bebcb50-c292-4bca-9299-2fdc21439b18)\"" pod="openshift-multus/multus-dl2st" podUID="6bebcb50-c292-4bca-9299-2fdc21439b18" Dec 01 08:40:39 crc kubenswrapper[4689]: I1201 08:40:39.045656 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.045624034 podStartE2EDuration="12.045624034s" podCreationTimestamp="2025-12-01 08:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:31.087494414 +0000 UTC m=+111.159782318" watchObservedRunningTime="2025-12-01 08:40:39.045624034 +0000 UTC m=+119.117911958" Dec 01 08:40:39 crc kubenswrapper[4689]: I1201 08:40:39.047481 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:39 crc kubenswrapper[4689]: E1201 08:40:39.047793 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:40 crc kubenswrapper[4689]: I1201 08:40:40.024308 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/1.log" Dec 01 08:40:40 crc kubenswrapper[4689]: I1201 08:40:40.046520 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:40 crc kubenswrapper[4689]: I1201 08:40:40.046587 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:40 crc kubenswrapper[4689]: I1201 08:40:40.046761 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:40 crc kubenswrapper[4689]: E1201 08:40:40.046897 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:40 crc kubenswrapper[4689]: E1201 08:40:40.047106 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:40 crc kubenswrapper[4689]: E1201 08:40:40.047193 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:40 crc kubenswrapper[4689]: E1201 08:40:40.785624 4689 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 01 08:40:41 crc kubenswrapper[4689]: I1201 08:40:41.047400 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:41 crc kubenswrapper[4689]: E1201 08:40:41.049589 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:41 crc kubenswrapper[4689]: E1201 08:40:41.169499 4689 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 08:40:42 crc kubenswrapper[4689]: I1201 08:40:42.047253 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:42 crc kubenswrapper[4689]: I1201 08:40:42.047651 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:42 crc kubenswrapper[4689]: E1201 08:40:42.048603 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:42 crc kubenswrapper[4689]: I1201 08:40:42.047818 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:42 crc kubenswrapper[4689]: E1201 08:40:42.048741 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:42 crc kubenswrapper[4689]: E1201 08:40:42.049100 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:43 crc kubenswrapper[4689]: I1201 08:40:43.058011 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:43 crc kubenswrapper[4689]: E1201 08:40:43.058346 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:44 crc kubenswrapper[4689]: I1201 08:40:44.046696 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:44 crc kubenswrapper[4689]: E1201 08:40:44.046930 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:44 crc kubenswrapper[4689]: I1201 08:40:44.047058 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:44 crc kubenswrapper[4689]: I1201 08:40:44.047738 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:44 crc kubenswrapper[4689]: E1201 08:40:44.047855 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:44 crc kubenswrapper[4689]: E1201 08:40:44.048065 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:44 crc kubenswrapper[4689]: I1201 08:40:44.048712 4689 scope.go:117] "RemoveContainer" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" Dec 01 08:40:45 crc kubenswrapper[4689]: I1201 08:40:45.047455 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:45 crc kubenswrapper[4689]: E1201 08:40:45.048992 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:45 crc kubenswrapper[4689]: I1201 08:40:45.071551 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/3.log" Dec 01 08:40:45 crc kubenswrapper[4689]: I1201 08:40:45.074788 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerStarted","Data":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} Dec 01 08:40:45 crc kubenswrapper[4689]: I1201 08:40:45.075472 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:40:45 crc kubenswrapper[4689]: I1201 08:40:45.114217 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podStartSLOduration=104.114188073 podStartE2EDuration="1m44.114188073s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:40:45.112903708 +0000 UTC m=+125.185191612" watchObservedRunningTime="2025-12-01 08:40:45.114188073 +0000 UTC m=+125.186475977" Dec 01 08:40:45 crc kubenswrapper[4689]: I1201 08:40:45.714649 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jtwvs"] Dec 01 08:40:45 crc kubenswrapper[4689]: I1201 08:40:45.714878 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:45 crc kubenswrapper[4689]: E1201 08:40:45.715020 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:46 crc kubenswrapper[4689]: I1201 08:40:46.113545 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:46 crc kubenswrapper[4689]: I1201 08:40:46.113827 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:46 crc kubenswrapper[4689]: E1201 08:40:46.114210 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:46 crc kubenswrapper[4689]: E1201 08:40:46.114489 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:46 crc kubenswrapper[4689]: E1201 08:40:46.171394 4689 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 08:40:47 crc kubenswrapper[4689]: I1201 08:40:47.046986 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:47 crc kubenswrapper[4689]: I1201 08:40:47.046975 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:47 crc kubenswrapper[4689]: E1201 08:40:47.047570 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:47 crc kubenswrapper[4689]: E1201 08:40:47.047847 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:48 crc kubenswrapper[4689]: I1201 08:40:48.046784 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:48 crc kubenswrapper[4689]: I1201 08:40:48.046839 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:48 crc kubenswrapper[4689]: E1201 08:40:48.047043 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:48 crc kubenswrapper[4689]: E1201 08:40:48.047205 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:49 crc kubenswrapper[4689]: I1201 08:40:49.046959 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:49 crc kubenswrapper[4689]: I1201 08:40:49.046959 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:49 crc kubenswrapper[4689]: E1201 08:40:49.047469 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:49 crc kubenswrapper[4689]: E1201 08:40:49.047696 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:50 crc kubenswrapper[4689]: I1201 08:40:50.046622 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:50 crc kubenswrapper[4689]: I1201 08:40:50.046683 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:50 crc kubenswrapper[4689]: E1201 08:40:50.046789 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:50 crc kubenswrapper[4689]: E1201 08:40:50.047002 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:51 crc kubenswrapper[4689]: I1201 08:40:51.047342 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:51 crc kubenswrapper[4689]: I1201 08:40:51.047519 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:51 crc kubenswrapper[4689]: E1201 08:40:51.049766 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:51 crc kubenswrapper[4689]: E1201 08:40:51.050033 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:51 crc kubenswrapper[4689]: E1201 08:40:51.172503 4689 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 08:40:52 crc kubenswrapper[4689]: I1201 08:40:52.047432 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:52 crc kubenswrapper[4689]: I1201 08:40:52.047534 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:52 crc kubenswrapper[4689]: E1201 08:40:52.047639 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:52 crc kubenswrapper[4689]: E1201 08:40:52.047763 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:53 crc kubenswrapper[4689]: I1201 08:40:53.046426 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:53 crc kubenswrapper[4689]: I1201 08:40:53.046483 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:53 crc kubenswrapper[4689]: E1201 08:40:53.046647 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:53 crc kubenswrapper[4689]: E1201 08:40:53.046828 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:54 crc kubenswrapper[4689]: I1201 08:40:54.047156 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:54 crc kubenswrapper[4689]: I1201 08:40:54.047175 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:54 crc kubenswrapper[4689]: E1201 08:40:54.047822 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:54 crc kubenswrapper[4689]: E1201 08:40:54.048044 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:54 crc kubenswrapper[4689]: I1201 08:40:54.048087 4689 scope.go:117] "RemoveContainer" containerID="f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034" Dec 01 08:40:55 crc kubenswrapper[4689]: I1201 08:40:55.047052 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:55 crc kubenswrapper[4689]: I1201 08:40:55.047096 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:55 crc kubenswrapper[4689]: E1201 08:40:55.047311 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:55 crc kubenswrapper[4689]: E1201 08:40:55.047440 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:55 crc kubenswrapper[4689]: I1201 08:40:55.160868 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/1.log" Dec 01 08:40:55 crc kubenswrapper[4689]: I1201 08:40:55.160953 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dl2st" event={"ID":"6bebcb50-c292-4bca-9299-2fdc21439b18","Type":"ContainerStarted","Data":"e15e4d4d20bedfa63e8dc39de991d3e641a4c410f89da82a2a3386442c160632"} Dec 01 08:40:56 crc kubenswrapper[4689]: I1201 08:40:56.047065 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:56 crc kubenswrapper[4689]: I1201 08:40:56.047169 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:56 crc kubenswrapper[4689]: E1201 08:40:56.047314 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:56 crc kubenswrapper[4689]: E1201 08:40:56.047428 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:56 crc kubenswrapper[4689]: E1201 08:40:56.173736 4689 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 08:40:57 crc kubenswrapper[4689]: I1201 08:40:57.046750 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:57 crc kubenswrapper[4689]: I1201 08:40:57.046783 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:57 crc kubenswrapper[4689]: E1201 08:40:57.046921 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:40:57 crc kubenswrapper[4689]: E1201 08:40:57.047041 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:58 crc kubenswrapper[4689]: I1201 08:40:58.047249 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:40:58 crc kubenswrapper[4689]: I1201 08:40:58.047657 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:40:58 crc kubenswrapper[4689]: E1201 08:40:58.049056 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:40:58 crc kubenswrapper[4689]: E1201 08:40:58.048554 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:40:59 crc kubenswrapper[4689]: I1201 08:40:59.046580 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:40:59 crc kubenswrapper[4689]: I1201 08:40:59.046646 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:40:59 crc kubenswrapper[4689]: E1201 08:40:59.046835 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:40:59 crc kubenswrapper[4689]: E1201 08:40:59.046948 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:41:00 crc kubenswrapper[4689]: I1201 08:41:00.046862 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:41:00 crc kubenswrapper[4689]: I1201 08:41:00.046887 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:41:00 crc kubenswrapper[4689]: E1201 08:41:00.047089 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 08:41:00 crc kubenswrapper[4689]: E1201 08:41:00.047269 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 08:41:01 crc kubenswrapper[4689]: I1201 08:41:01.046422 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:41:01 crc kubenswrapper[4689]: I1201 08:41:01.046421 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:41:01 crc kubenswrapper[4689]: E1201 08:41:01.049555 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 08:41:01 crc kubenswrapper[4689]: E1201 08:41:01.049719 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jtwvs" podUID="5d6a08d0-a948-4c69-b3f0-f5e084adb453" Dec 01 08:41:02 crc kubenswrapper[4689]: I1201 08:41:02.046502 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:41:02 crc kubenswrapper[4689]: I1201 08:41:02.046548 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:41:02 crc kubenswrapper[4689]: I1201 08:41:02.051006 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 08:41:02 crc kubenswrapper[4689]: I1201 08:41:02.051043 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 08:41:02 crc kubenswrapper[4689]: I1201 08:41:02.051116 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 08:41:02 crc kubenswrapper[4689]: I1201 08:41:02.050984 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 08:41:03 crc kubenswrapper[4689]: I1201 08:41:03.047128 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:41:03 crc kubenswrapper[4689]: I1201 08:41:03.048159 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:41:03 crc kubenswrapper[4689]: I1201 08:41:03.051703 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 01 08:41:03 crc kubenswrapper[4689]: I1201 08:41:03.056149 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 08:41:04 crc kubenswrapper[4689]: I1201 08:41:04.675256 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.907455 4689 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.970480 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f"] Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.971630 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.985553 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.986028 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.986302 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.986642 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.986901 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.987151 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.998454 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf"] Dec 01 08:41:07 crc kubenswrapper[4689]: I1201 08:41:07.999764 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.013004 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-j5r2f"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.013609 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.014575 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.018695 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k7w5z"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.019209 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.020631 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.021263 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.021802 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gzrk2"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.022326 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.023407 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8fdl"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.023763 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.028360 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.029671 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.033288 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-29dmp"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.034881 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.036163 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-chlnk"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.036752 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xx949"] Dec 01 08:41:08 crc kubenswrapper[4689]: W1201 08:41:08.036786 4689 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: configmaps "machine-approver-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.036879 4689 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-approver-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.037150 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.037271 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.044881 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gwkk8"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.045272 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.045457 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.045494 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.049796 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.058549 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.058941 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.059040 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.059198 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.059311 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.059339 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.059437 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.059495 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.059523 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.059611 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.059717 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.060213 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.060502 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.061008 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.061087 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ch9jh"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.061214 4689 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.061364 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.061522 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.062134 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.062702 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.062887 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.063147 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.063327 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.067271 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.058561 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.067682 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.070694 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.067774 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.067898 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.067965 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.067961 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.067982 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068001 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.073064 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9nx2j"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068010 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068031 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.073236 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068039 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068061 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068093 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068127 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068151 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068173 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.073857 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068196 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068564 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068614 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.068654 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.069002 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.069085 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.069135 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.069312 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.069352 4689 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.070210 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.070258 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.070621 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.070664 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.081710 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8rfdp"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.082452 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.083957 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z629s"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.084473 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.088887 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.089068 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.089236 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.089442 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.089550 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.088891 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.090712 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.090947 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.091089 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.096967 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.097095 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/70e552a9-22d9-4efc-b40a-25232123691b-audit-dir\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.097169 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkb9h\" (UniqueName: \"kubernetes.io/projected/70e552a9-22d9-4efc-b40a-25232123691b-kube-api-access-wkb9h\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.097246 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e552a9-22d9-4efc-b40a-25232123691b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.097267 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70e552a9-22d9-4efc-b40a-25232123691b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.097315 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70e552a9-22d9-4efc-b40a-25232123691b-audit-policies\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.097338 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e552a9-22d9-4efc-b40a-25232123691b-serving-cert\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.097358 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70e552a9-22d9-4efc-b40a-25232123691b-encryption-config\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.097413 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70e552a9-22d9-4efc-b40a-25232123691b-etcd-client\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.097913 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.098242 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.103082 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.112164 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.112266 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.112389 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.118273 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.118667 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.119491 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.119584 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.119734 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.119830 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.119936 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.120273 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.125280 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.125465 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.130657 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.130865 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.130950 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.131173 4689 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.131628 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.131746 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.131831 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.131928 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.132021 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.133993 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.134290 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.134565 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.134889 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.134895 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.135070 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.135208 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.137938 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.138640 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.138995 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j5r2f"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.139039 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.139072 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.139541 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.142571 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.145210 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.145672 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.148153 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.148710 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.148769 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.149804 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k7w5z"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.155456 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.164470 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.164951 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 
08:41:08.164986 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.165765 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.179893 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.180599 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hb577"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.181135 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.181706 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.181740 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.182303 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.183651 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.184340 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.187303 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.190265 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.192786 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gwkk8"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.196488 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.197841 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gzrk2"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.197960 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.198270 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70e552a9-22d9-4efc-b40a-25232123691b-audit-policies\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.198321 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e552a9-22d9-4efc-b40a-25232123691b-serving-cert\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.198350 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70e552a9-22d9-4efc-b40a-25232123691b-encryption-config\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.198409 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70e552a9-22d9-4efc-b40a-25232123691b-etcd-client\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.198430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70e552a9-22d9-4efc-b40a-25232123691b-audit-dir\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.198461 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkb9h\" (UniqueName: \"kubernetes.io/projected/70e552a9-22d9-4efc-b40a-25232123691b-kube-api-access-wkb9h\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.198493 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70e552a9-22d9-4efc-b40a-25232123691b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.198509 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e552a9-22d9-4efc-b40a-25232123691b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.198832 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 
08:41:08.199320 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e552a9-22d9-4efc-b40a-25232123691b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.199404 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70e552a9-22d9-4efc-b40a-25232123691b-audit-dir\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.200117 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70e552a9-22d9-4efc-b40a-25232123691b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.200610 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g9pxf"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.201581 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.205123 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l54ll"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.205451 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/70e552a9-22d9-4efc-b40a-25232123691b-audit-policies\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.206088 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.206811 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70e552a9-22d9-4efc-b40a-25232123691b-encryption-config\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.208415 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.211636 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.212161 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70e552a9-22d9-4efc-b40a-25232123691b-etcd-client\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.215300 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e552a9-22d9-4efc-b40a-25232123691b-serving-cert\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.215468 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8rfdp"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.217098 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.217965 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.219143 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9nx2j"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.221026 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.221861 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.225205 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.225752 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-59vc4"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.227252 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.228180 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.228728 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.229029 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.233637 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8fdl"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.234624 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fdcdc"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.235247 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.235310 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.244124 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.245348 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.246825 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bnznn"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.247628 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bnznn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.249526 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.249939 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z629s"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.251037 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.253427 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.253687 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.258344 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.259239 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.262190 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-29dmp"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.274234 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.276441 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.276502 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xx949"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.282174 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.284888 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.286248 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-chlnk"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.287244 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.293880 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.296307 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.297100 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ch9jh"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.299395 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301584 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pk9c\" (UniqueName: \"kubernetes.io/projected/e552c7d1-a8ad-4033-b89e-951c6f58588b-kube-api-access-2pk9c\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301617 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301645 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px2ln\" (UniqueName: \"kubernetes.io/projected/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-kube-api-access-px2ln\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301685 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c062b92b-1709-4892-9b40-b1d2405d5812-config\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: 
\"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301708 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301728 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-serving-cert\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301755 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d86e20d-febe-4cfb-a738-4705f8122326-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301780 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcl2c\" (UniqueName: \"kubernetes.io/projected/c389f615-2c0f-467b-924e-ad740d3fff07-kube-api-access-gcl2c\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301830 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-config\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301856 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-oauth-serving-cert\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301880 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d86e20d-febe-4cfb-a738-4705f8122326-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301903 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-image-import-ca\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301925 4689 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c389f615-2c0f-467b-924e-ad740d3fff07-serving-cert\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301962 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-trusted-ca-bundle\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.301997 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w94sd\" (UniqueName: \"kubernetes.io/projected/21eaf97a-bf73-4e70-a9bc-153b17b8a799-kube-api-access-w94sd\") pod \"openshift-config-operator-7777fb866f-29dmp\" (UID: \"21eaf97a-bf73-4e70-a9bc-153b17b8a799\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302024 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c389f615-2c0f-467b-924e-ad740d3fff07-etcd-client\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302131 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxpvk\" (UniqueName: \"kubernetes.io/projected/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-kube-api-access-mxpvk\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302158 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c062b92b-1709-4892-9b40-b1d2405d5812-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302180 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-console-config\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302205 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-bound-sa-token\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302227 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-oauth-config\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302248 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-trusted-ca\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302284 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302308 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-registry-tls\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302330 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f005f766-04a5-4b03-8c50-2fd9ddf967be-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h47fn\" (UID: \"f005f766-04a5-4b03-8c50-2fd9ddf967be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302353 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c389f615-2c0f-467b-924e-ad740d3fff07-encryption-config\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302402 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qq96\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-kube-api-access-6qq96\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302426 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/57864f38-0ce2-401c-a4c4-96fc7cce8346-machine-approver-tls\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302448 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f005f766-04a5-4b03-8c50-2fd9ddf967be-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h47fn\" (UID: \"f005f766-04a5-4b03-8c50-2fd9ddf967be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302473 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7646df48-faa9-486b-b7fc-c8b1d97ead27-metrics-tls\") pod \"dns-operator-744455d44c-k7w5z\" (UID: \"7646df48-faa9-486b-b7fc-c8b1d97ead27\") " pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302497 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21eaf97a-bf73-4e70-a9bc-153b17b8a799-serving-cert\") pod \"openshift-config-operator-7777fb866f-29dmp\" (UID: \"21eaf97a-bf73-4e70-a9bc-153b17b8a799\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302520 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302543 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c062b92b-1709-4892-9b40-b1d2405d5812-images\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302566 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c389f615-2c0f-467b-924e-ad740d3fff07-node-pullsecrets\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302715 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-config\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.302779 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e552c7d1-a8ad-4033-b89e-951c6f58588b-serving-cert\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303087 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-audit\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303111 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-service-ca\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303142 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k5h2\" (UniqueName: \"kubernetes.io/projected/bd24264f-fc40-410e-9bed-3f8e340035b5-kube-api-access-5k5h2\") pod \"downloads-7954f5f757-xx949\" (UID: \"bd24264f-fc40-410e-9bed-3f8e340035b5\") " pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303164 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e552c7d1-a8ad-4033-b89e-951c6f58588b-etcd-client\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303181 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-service-ca-bundle\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303207 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph8kj\" (UniqueName: \"kubernetes.io/projected/57864f38-0ce2-401c-a4c4-96fc7cce8346-kube-api-access-ph8kj\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303238 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.303278 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:08.803248907 +0000 UTC m=+148.875536811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303317 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/21eaf97a-bf73-4e70-a9bc-153b17b8a799-available-featuregates\") pod \"openshift-config-operator-7777fb866f-29dmp\" (UID: \"21eaf97a-bf73-4e70-a9bc-153b17b8a799\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303348 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-serving-cert\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303446 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwxz6\" (UniqueName: \"kubernetes.io/projected/c062b92b-1709-4892-9b40-b1d2405d5812-kube-api-access-gwxz6\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303475 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e552c7d1-a8ad-4033-b89e-951c6f58588b-etcd-ca\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303501 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e552c7d1-a8ad-4033-b89e-951c6f58588b-etcd-service-ca\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303516 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c389f615-2c0f-467b-924e-ad740d3fff07-audit-dir\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303536 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57864f38-0ce2-401c-a4c4-96fc7cce8346-auth-proxy-config\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: 
I1201 08:41:08.303552 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5qsf\" (UniqueName: \"kubernetes.io/projected/710ccb76-093a-484d-a784-737ae81e7c21-kube-api-access-l5qsf\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303579 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57864f38-0ce2-401c-a4c4-96fc7cce8346-config\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303594 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sr6s\" (UniqueName: \"kubernetes.io/projected/7646df48-faa9-486b-b7fc-c8b1d97ead27-kube-api-access-4sr6s\") pod \"dns-operator-744455d44c-k7w5z\" (UID: \"7646df48-faa9-486b-b7fc-c8b1d97ead27\") " pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303610 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwd9j\" (UniqueName: \"kubernetes.io/projected/f005f766-04a5-4b03-8c50-2fd9ddf967be-kube-api-access-hwd9j\") pod \"openshift-controller-manager-operator-756b6f6bc6-h47fn\" (UID: \"f005f766-04a5-4b03-8c50-2fd9ddf967be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303633 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-registry-certificates\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303651 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303666 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e552c7d1-a8ad-4033-b89e-951c6f58588b-config\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.303698 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-etcd-serving-ca\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.305939 4689 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.307668 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l54ll"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.308972 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.310715 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.313643 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.314461 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.315050 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.316566 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8pg9k"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.317598 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bnznn"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.317962 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.318691 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.319778 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fdcdc"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.321178 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-59vc4"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.321975 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8pg9k"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.323557 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g9pxf"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.340539 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wrw72"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.339059 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.341720 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wrw72"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.342148 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.347872 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qszrn"] Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.348875 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qszrn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.353684 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.374006 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.393813 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404276 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.404500 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:08.9044682 +0000 UTC m=+148.976756104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404537 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w94sd\" (UniqueName: \"kubernetes.io/projected/21eaf97a-bf73-4e70-a9bc-153b17b8a799-kube-api-access-w94sd\") pod \"openshift-config-operator-7777fb866f-29dmp\" (UID: \"21eaf97a-bf73-4e70-a9bc-153b17b8a799\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404584 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0af3ff-5d7b-41ae-be27-4dea7a282d86-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vf9h5\" (UID: \"3d0af3ff-5d7b-41ae-be27-4dea7a282d86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404613 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l54ll\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404659 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbj5g\" (UniqueName: \"kubernetes.io/projected/3bb10b5c-893e-422d-a60f-101f4717b0bc-kube-api-access-rbj5g\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404791 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-trusted-ca\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404852 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxpvk\" (UniqueName: \"kubernetes.io/projected/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-kube-api-access-mxpvk\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404890 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c062b92b-1709-4892-9b40-b1d2405d5812-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404920 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4wck\" (UniqueName: \"kubernetes.io/projected/c1a4774c-b15d-424e-bb37-d6880da5ad85-kube-api-access-j4wck\") pod \"package-server-manager-789f6589d5-zvzpg\" (UID: \"c1a4774c-b15d-424e-bb37-d6880da5ad85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404948 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8j65\" (UniqueName: \"kubernetes.io/projected/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-kube-api-access-j8j65\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.404993 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-bound-sa-token\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405033 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-oauth-config\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405072 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-csi-data-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405103 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-trusted-ca\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405133 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp692\" (UniqueName: \"kubernetes.io/projected/2499ecbd-1cda-49a9-8c8a-e80d44127f01-kube-api-access-bp692\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjxwg\" (UID: \"2499ecbd-1cda-49a9-8c8a-e80d44127f01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405164 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-config\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc 
kubenswrapper[4689]: I1201 08:41:08.405189 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-client-ca\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405216 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45f382fb-86c3-493f-a2ab-eb9b51923752-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f6t2n\" (UID: \"45f382fb-86c3-493f-a2ab-eb9b51923752\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405267 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-serving-cert\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405306 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcwnk\" (UniqueName: \"kubernetes.io/projected/3d0af3ff-5d7b-41ae-be27-4dea7a282d86-kube-api-access-xcwnk\") pod \"kube-storage-version-migrator-operator-b67b599dd-vf9h5\" (UID: \"3d0af3ff-5d7b-41ae-be27-4dea7a282d86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405328 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/57864f38-0ce2-401c-a4c4-96fc7cce8346-machine-approver-tls\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405349 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f005f766-04a5-4b03-8c50-2fd9ddf967be-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h47fn\" (UID: \"f005f766-04a5-4b03-8c50-2fd9ddf967be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405378 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c062b92b-1709-4892-9b40-b1d2405d5812-images\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405396 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c389f615-2c0f-467b-924e-ad740d3fff07-node-pullsecrets\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc 
kubenswrapper[4689]: I1201 08:41:08.405418 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7754ebe-fe0e-44a5-b463-1d005035d249-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pzx5g\" (UID: \"e7754ebe-fe0e-44a5-b463-1d005035d249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405448 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21eaf97a-bf73-4e70-a9bc-153b17b8a799-serving-cert\") pod \"openshift-config-operator-7777fb866f-29dmp\" (UID: \"21eaf97a-bf73-4e70-a9bc-153b17b8a799\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405468 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6j2p\" (UniqueName: \"kubernetes.io/projected/9eaef062-e274-4f3c-8ce2-3ea23e7106da-kube-api-access-s6j2p\") pod \"migrator-59844c95c7-pr577\" (UID: \"9eaef062-e274-4f3c-8ce2-3ea23e7106da\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405487 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ce6c52-f027-43b4-904f-c402047a39f0-config\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405564 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df446b8a-9d6b-41e5-9b7f-2ffa97a1217c-cert\") pod \"ingress-canary-bnznn\" (UID: \"df446b8a-9d6b-41e5-9b7f-2ffa97a1217c\") " pod="openshift-ingress-canary/ingress-canary-bnznn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405581 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e552c7d1-a8ad-4033-b89e-951c6f58588b-serving-cert\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405600 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-audit\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405617 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ce6c52-f027-43b4-904f-c402047a39f0-serving-cert\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405633 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d0af3ff-5d7b-41ae-be27-4dea7a282d86-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vf9h5\" (UID: \"3d0af3ff-5d7b-41ae-be27-4dea7a282d86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405649 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405735 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e552c7d1-a8ad-4033-b89e-951c6f58588b-etcd-client\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405754 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szh8z\" (UniqueName: \"kubernetes.io/projected/e85d92ae-30aa-4302-b217-43a48dcadd8a-kube-api-access-szh8z\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405773 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph8kj\" (UniqueName: \"kubernetes.io/projected/57864f38-0ce2-401c-a4c4-96fc7cce8346-kube-api-access-ph8kj\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405794 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48cda75c-3d0c-44e1-8f98-c191a4e79e1b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ffvcj\" (UID: \"48cda75c-3d0c-44e1-8f98-c191a4e79e1b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405813 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-registration-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405828 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405845 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/21eaf97a-bf73-4e70-a9bc-153b17b8a799-available-featuregates\") pod \"openshift-config-operator-7777fb866f-29dmp\" (UID: \"21eaf97a-bf73-4e70-a9bc-153b17b8a799\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405863 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48cda75c-3d0c-44e1-8f98-c191a4e79e1b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ffvcj\" (UID: \"48cda75c-3d0c-44e1-8f98-c191a4e79e1b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405892 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-policies\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405908 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-mountpoint-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405925 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e552c7d1-a8ad-4033-b89e-951c6f58588b-etcd-ca\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405939 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e552c7d1-a8ad-4033-b89e-951c6f58588b-etcd-service-ca\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405955 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c389f615-2c0f-467b-924e-ad740d3fff07-audit-dir\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405971 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57864f38-0ce2-401c-a4c4-96fc7cce8346-auth-proxy-config\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.405986 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5qsf\" (UniqueName: \"kubernetes.io/projected/710ccb76-093a-484d-a784-737ae81e7c21-kube-api-access-l5qsf\") pod \"console-f9d7485db-j5r2f\" (UID: 
\"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406005 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlctf\" (UniqueName: \"kubernetes.io/projected/be1070d3-8d5b-4910-aee6-3fee2a360934-kube-api-access-zlctf\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406032 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkh8x\" (UniqueName: \"kubernetes.io/projected/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-kube-api-access-fkh8x\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406048 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7754ebe-fe0e-44a5-b463-1d005035d249-config\") pod \"kube-controller-manager-operator-78b949d7b-pzx5g\" (UID: \"e7754ebe-fe0e-44a5-b463-1d005035d249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406065 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-socket-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406086 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406107 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8mk\" (UniqueName: \"kubernetes.io/projected/45f382fb-86c3-493f-a2ab-eb9b51923752-kube-api-access-5j8mk\") pod \"machine-config-controller-84d6567774-f6t2n\" (UID: \"45f382fb-86c3-493f-a2ab-eb9b51923752\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406128 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-etcd-serving-ca\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406147 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-dir\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406166 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadd1959-680d-4f67-9af9-65d8519398df-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bsffw\" (UID: \"dadd1959-680d-4f67-9af9-65d8519398df\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406183 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ns49\" (UniqueName: \"kubernetes.io/projected/df446b8a-9d6b-41e5-9b7f-2ffa97a1217c-kube-api-access-4ns49\") pod \"ingress-canary-bnznn\" (UID: \"df446b8a-9d6b-41e5-9b7f-2ffa97a1217c\") " pod="openshift-ingress-canary/ingress-canary-bnznn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406204 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406223 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406243 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406262 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d86e20d-febe-4cfb-a738-4705f8122326-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406280 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcl2c\" (UniqueName: \"kubernetes.io/projected/c389f615-2c0f-467b-924e-ad740d3fff07-kube-api-access-gcl2c\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406296 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l54ll\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" Dec 01 08:41:08 
crc kubenswrapper[4689]: I1201 08:41:08.406315 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadd1959-680d-4f67-9af9-65d8519398df-config\") pod \"kube-apiserver-operator-766d6c64bb-bsffw\" (UID: \"dadd1959-680d-4f67-9af9-65d8519398df\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406334 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3bb10b5c-893e-422d-a60f-101f4717b0bc-default-certificate\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406350 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1b970c0-59a2-4782-8664-b17a7d7a8202-srv-cert\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406380 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkmmm\" (UniqueName: \"kubernetes.io/projected/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-kube-api-access-nkmmm\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406397 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2499ecbd-1cda-49a9-8c8a-e80d44127f01-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjxwg\" (UID: \"2499ecbd-1cda-49a9-8c8a-e80d44127f01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406414 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxzzf\" (UniqueName: \"kubernetes.io/projected/bd8122c2-aaf0-4148-849c-ca4502dd0f55-kube-api-access-zxzzf\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406411 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c062b92b-1709-4892-9b40-b1d2405d5812-images\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406442 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d86e20d-febe-4cfb-a738-4705f8122326-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: 
I1201 08:41:08.406461 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-image-import-ca\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406478 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406500 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-key\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406515 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf58a7d2-9013-4cdf-a435-67695f7677a1-metrics-tls\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406531 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7754ebe-fe0e-44a5-b463-1d005035d249-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pzx5g\" (UID: \"e7754ebe-fe0e-44a5-b463-1d005035d249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406550 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406589 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d52jc\" (UniqueName: \"kubernetes.io/projected/cf58a7d2-9013-4cdf-a435-67695f7677a1-kube-api-access-d52jc\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406608 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c389f615-2c0f-467b-924e-ad740d3fff07-etcd-client\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406624 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1b970c0-59a2-4782-8664-b17a7d7a8202-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406641 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-config\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406662 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-console-config\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406684 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-registry-tls\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406699 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f005f766-04a5-4b03-8c50-2fd9ddf967be-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h47fn\" (UID: \"f005f766-04a5-4b03-8c50-2fd9ddf967be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406715 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c389f615-2c0f-467b-924e-ad740d3fff07-encryption-config\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406732 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb20738-492b-4b13-bf8a-5c32aabc0f32-serving-cert\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406751 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-trusted-ca\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406793 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406816 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjzt6\" (UniqueName: \"kubernetes.io/projected/2546c757-03ab-4ba3-95d0-aa537cd615fb-kube-api-access-mjzt6\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406835 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qq96\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-kube-api-access-6qq96\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406853 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-plugins-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406870 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7646df48-faa9-486b-b7fc-c8b1d97ead27-metrics-tls\") pod \"dns-operator-744455d44c-k7w5z\" (UID: \"7646df48-faa9-486b-b7fc-c8b1d97ead27\") " pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406886 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1120a89b-2c45-428f-8577-eb6eb712961b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4nmmg\" (UID: \"1120a89b-2c45-428f-8577-eb6eb712961b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406906 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b-profile-collector-cert\") pod \"catalog-operator-68c6474976-ltkzh\" (UID: \"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406926 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406951 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2546c757-03ab-4ba3-95d0-aa537cd615fb-tmpfs\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: 
\"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406972 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bb10b5c-893e-422d-a60f-101f4717b0bc-service-ca-bundle\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.406989 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-images\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407007 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-config\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407025 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd8122c2-aaf0-4148-849c-ca4502dd0f55-serving-cert\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407042 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828d985f-2b1a-47df-b653-907e8684d1f5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vhndn\" (UID: \"828d985f-2b1a-47df-b653-907e8684d1f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407059 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-service-ca\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407080 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45f382fb-86c3-493f-a2ab-eb9b51923752-proxy-tls\") pod \"machine-config-controller-84d6567774-f6t2n\" (UID: \"45f382fb-86c3-493f-a2ab-eb9b51923752\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407102 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3bb10b5c-893e-422d-a60f-101f4717b0bc-stats-auth\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 
01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407119 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k5h2\" (UniqueName: \"kubernetes.io/projected/bd24264f-fc40-410e-9bed-3f8e340035b5-kube-api-access-5k5h2\") pod \"downloads-7954f5f757-xx949\" (UID: \"bd24264f-fc40-410e-9bed-3f8e340035b5\") " pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407137 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-service-ca-bundle\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407155 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1070d3-8d5b-4910-aee6-3fee2a360934-config-volume\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407173 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407188 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xffcg\" (UniqueName: \"kubernetes.io/projected/2fd47e85-de9d-475a-8907-4e805cb1cfc8-kube-api-access-xffcg\") pod \"marketplace-operator-79b997595-l54ll\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407207 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-serving-cert\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407212 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f005f766-04a5-4b03-8c50-2fd9ddf967be-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h47fn\" (UID: \"f005f766-04a5-4b03-8c50-2fd9ddf967be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407224 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm94m\" (UniqueName: \"kubernetes.io/projected/b1b970c0-59a2-4782-8664-b17a7d7a8202-kube-api-access-hm94m\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 
08:41:08.407288 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c389f615-2c0f-467b-924e-ad740d3fff07-node-pullsecrets\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407560 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-apiservice-cert\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407617 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-client-ca\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407647 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z422r\" (UniqueName: \"kubernetes.io/projected/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-kube-api-access-z422r\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407671 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bb10b5c-893e-422d-a60f-101f4717b0bc-metrics-certs\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407703 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwxz6\" (UniqueName: \"kubernetes.io/projected/c062b92b-1709-4892-9b40-b1d2405d5812-kube-api-access-gwxz6\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407731 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407759 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dadd1959-680d-4f67-9af9-65d8519398df-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bsffw\" (UID: \"dadd1959-680d-4f67-9af9-65d8519398df\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407805 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/828d985f-2b1a-47df-b653-907e8684d1f5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vhndn\" (UID: \"828d985f-2b1a-47df-b653-907e8684d1f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407828 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407859 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407884 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb277\" (UniqueName: \"kubernetes.io/projected/dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b-kube-api-access-pb277\") pod \"catalog-operator-68c6474976-ltkzh\" (UID: \"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407908 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-webhook-cert\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407930 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf58a7d2-9013-4cdf-a435-67695f7677a1-trusted-ca\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407955 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57864f38-0ce2-401c-a4c4-96fc7cce8346-config\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.407979 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sr6s\" (UniqueName: \"kubernetes.io/projected/7646df48-faa9-486b-b7fc-c8b1d97ead27-kube-api-access-4sr6s\") pod \"dns-operator-744455d44c-k7w5z\" (UID: \"7646df48-faa9-486b-b7fc-c8b1d97ead27\") " pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408005 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-registry-certificates\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408028 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwd9j\" (UniqueName: \"kubernetes.io/projected/f005f766-04a5-4b03-8c50-2fd9ddf967be-kube-api-access-hwd9j\") pod \"openshift-controller-manager-operator-756b6f6bc6-h47fn\" (UID: \"f005f766-04a5-4b03-8c50-2fd9ddf967be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408058 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408084 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/065b75bb-d7a1-478c-bb62-cec913693a7e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g9pxf\" (UID: \"065b75bb-d7a1-478c-bb62-cec913693a7e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408110 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e552c7d1-a8ad-4033-b89e-951c6f58588b-config\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408133 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-cabundle\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408156 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-proxy-tls\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408195 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408230 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a4774c-b15d-424e-bb37-d6880da5ad85-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zvzpg\" (UID: \"c1a4774c-b15d-424e-bb37-d6880da5ad85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408268 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pk9c\" (UniqueName: \"kubernetes.io/projected/e552c7d1-a8ad-4033-b89e-951c6f58588b-kube-api-access-2pk9c\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408302 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmvxh\" (UniqueName: \"kubernetes.io/projected/59ce6c52-f027-43b4-904f-c402047a39f0-kube-api-access-cmvxh\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408331 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408491 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-config\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408532 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408569 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408920 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px2ln\" (UniqueName: \"kubernetes.io/projected/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-kube-api-access-px2ln\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408952 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be1070d3-8d5b-4910-aee6-3fee2a360934-secret-volume\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.408976 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z6gq\" (UniqueName: \"kubernetes.io/projected/5fb20738-492b-4b13-bf8a-5c32aabc0f32-kube-api-access-2z6gq\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409003 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxcv2\" (UniqueName: \"kubernetes.io/projected/1120a89b-2c45-428f-8577-eb6eb712961b-kube-api-access-zxcv2\") pod \"cluster-samples-operator-665b6dd947-4nmmg\" (UID: \"1120a89b-2c45-428f-8577-eb6eb712961b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409045 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c062b92b-1709-4892-9b40-b1d2405d5812-config\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409070 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-serving-cert\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409096 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8956x\" (UniqueName: \"kubernetes.io/projected/065b75bb-d7a1-478c-bb62-cec913693a7e-kube-api-access-8956x\") pod \"multus-admission-controller-857f4d67dd-g9pxf\" (UID: \"065b75bb-d7a1-478c-bb62-cec913693a7e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409123 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-config\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409148 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-oauth-serving-cert\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409172 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/cf58a7d2-9013-4cdf-a435-67695f7677a1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409194 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b-srv-cert\") pod \"catalog-operator-68c6474976-ltkzh\" (UID: \"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409217 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgs9z\" (UniqueName: \"kubernetes.io/projected/828d985f-2b1a-47df-b653-907e8684d1f5-kube-api-access-jgs9z\") pod \"openshift-apiserver-operator-796bbdcf4f-vhndn\" (UID: \"828d985f-2b1a-47df-b653-907e8684d1f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409240 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48cda75c-3d0c-44e1-8f98-c191a4e79e1b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ffvcj\" (UID: \"48cda75c-3d0c-44e1-8f98-c191a4e79e1b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409269 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c389f615-2c0f-467b-924e-ad740d3fff07-serving-cert\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409293 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-trusted-ca-bundle\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409874 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.409958 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21eaf97a-bf73-4e70-a9bc-153b17b8a799-serving-cert\") pod \"openshift-config-operator-7777fb866f-29dmp\" (UID: \"21eaf97a-bf73-4e70-a9bc-153b17b8a799\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.410670 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-trusted-ca-bundle\") pod \"console-f9d7485db-j5r2f\" (UID: 
\"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.410785 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c062b92b-1709-4892-9b40-b1d2405d5812-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.411682 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-image-import-ca\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.412012 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-oauth-config\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.412221 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d86e20d-febe-4cfb-a738-4705f8122326-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.412417 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-etcd-serving-ca\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.412570 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/57864f38-0ce2-401c-a4c4-96fc7cce8346-machine-approver-tls\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.412762 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d86e20d-febe-4cfb-a738-4705f8122326-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.413282 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-audit\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.413630 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.413921 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e552c7d1-a8ad-4033-b89e-951c6f58588b-serving-cert\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.414270 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:08.914250931 +0000 UTC m=+148.986538915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.414743 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e552c7d1-a8ad-4033-b89e-951c6f58588b-etcd-ca\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.415850 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e552c7d1-a8ad-4033-b89e-951c6f58588b-etcd-service-ca\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.416042 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c389f615-2c0f-467b-924e-ad740d3fff07-audit-dir\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.416307 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-config\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.417299 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.417746 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/57864f38-0ce2-401c-a4c4-96fc7cce8346-auth-proxy-config\") pod \"machine-approver-56656f9798-24jcf\" (UID: 
\"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.419112 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-console-config\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.419955 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c062b92b-1709-4892-9b40-b1d2405d5812-config\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.420912 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c389f615-2c0f-467b-924e-ad740d3fff07-etcd-client\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.422103 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-service-ca-bundle\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.422267 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-service-ca\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.422575 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/21eaf97a-bf73-4e70-a9bc-153b17b8a799-available-featuregates\") pod \"openshift-config-operator-7777fb866f-29dmp\" (UID: \"21eaf97a-bf73-4e70-a9bc-153b17b8a799\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.422897 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-registry-certificates\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.423399 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.423653 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e552c7d1-a8ad-4033-b89e-951c6f58588b-config\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.423909 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c389f615-2c0f-467b-924e-ad740d3fff07-config\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.424483 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-oauth-serving-cert\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.429756 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7646df48-faa9-486b-b7fc-c8b1d97ead27-metrics-tls\") pod \"dns-operator-744455d44c-k7w5z\" (UID: \"7646df48-faa9-486b-b7fc-c8b1d97ead27\") " pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.432047 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c389f615-2c0f-467b-924e-ad740d3fff07-encryption-config\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.432283 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.433046 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f005f766-04a5-4b03-8c50-2fd9ddf967be-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h47fn\" (UID: \"f005f766-04a5-4b03-8c50-2fd9ddf967be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.434175 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-serving-cert\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.434305 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.434509 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c389f615-2c0f-467b-924e-ad740d3fff07-serving-cert\") pod \"apiserver-76f77b778f-ch9jh\" (UID: 
\"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.435521 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e552c7d1-a8ad-4033-b89e-951c6f58588b-etcd-client\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.440305 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-registry-tls\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.441938 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-serving-cert\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.457988 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.477033 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.496728 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510033 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.510226 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.010188495 +0000 UTC m=+149.082476409 (durationBeforeRetry 500ms). 
Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510288 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-proxy-tls\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr"
Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510360 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp"
Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510427 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a4774c-b15d-424e-bb37-d6880da5ad85-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zvzpg\" (UID: \"c1a4774c-b15d-424e-bb37-d6880da5ad85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg"
Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510461 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xnjd\" (UniqueName: \"kubernetes.io/projected/73a295f0-f461-4f83-af34-72b309949e99-kube-api-access-2xnjd\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn"
Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510487 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr"
Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510527 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be1070d3-8d5b-4910-aee6-3fee2a360934-secret-volume\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp"
Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510551 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmvxh\" (UniqueName: \"kubernetes.io/projected/59ce6c52-f027-43b4-904f-c402047a39f0-kube-api-access-cmvxh\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510575 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510598 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-config\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510633 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z6gq\" (UniqueName: \"kubernetes.io/projected/5fb20738-492b-4b13-bf8a-5c32aabc0f32-kube-api-access-2z6gq\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510657 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcv2\" (UniqueName: \"kubernetes.io/projected/1120a89b-2c45-428f-8577-eb6eb712961b-kube-api-access-zxcv2\") pod \"cluster-samples-operator-665b6dd947-4nmmg\" (UID: \"1120a89b-2c45-428f-8577-eb6eb712961b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510678 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8956x\" (UniqueName: \"kubernetes.io/projected/065b75bb-d7a1-478c-bb62-cec913693a7e-kube-api-access-8956x\") pod \"multus-admission-controller-857f4d67dd-g9pxf\" (UID: \"065b75bb-d7a1-478c-bb62-cec913693a7e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510705 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf58a7d2-9013-4cdf-a435-67695f7677a1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510725 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b-srv-cert\") pod \"catalog-operator-68c6474976-ltkzh\" (UID: \"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510755 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgs9z\" (UniqueName: \"kubernetes.io/projected/828d985f-2b1a-47df-b653-907e8684d1f5-kube-api-access-jgs9z\") pod \"openshift-apiserver-operator-796bbdcf4f-vhndn\" (UID: \"828d985f-2b1a-47df-b653-907e8684d1f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:08 crc 
kubenswrapper[4689]: I1201 08:41:08.510786 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48cda75c-3d0c-44e1-8f98-c191a4e79e1b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ffvcj\" (UID: \"48cda75c-3d0c-44e1-8f98-c191a4e79e1b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510829 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0af3ff-5d7b-41ae-be27-4dea7a282d86-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vf9h5\" (UID: \"3d0af3ff-5d7b-41ae-be27-4dea7a282d86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510854 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l54ll\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510878 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbj5g\" (UniqueName: \"kubernetes.io/projected/3bb10b5c-893e-422d-a60f-101f4717b0bc-kube-api-access-rbj5g\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510901 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-trusted-ca\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510923 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4wck\" (UniqueName: \"kubernetes.io/projected/c1a4774c-b15d-424e-bb37-d6880da5ad85-kube-api-access-j4wck\") pod \"package-server-manager-789f6589d5-zvzpg\" (UID: \"c1a4774c-b15d-424e-bb37-d6880da5ad85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.510948 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8j65\" (UniqueName: \"kubernetes.io/projected/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-kube-api-access-j8j65\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511021 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4095fada-3a3f-4938-a63b-07eb736ad683-metrics-tls\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511054 4689 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-csi-data-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511076 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-client-ca\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511101 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45f382fb-86c3-493f-a2ab-eb9b51923752-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f6t2n\" (UID: \"45f382fb-86c3-493f-a2ab-eb9b51923752\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511133 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp692\" (UniqueName: \"kubernetes.io/projected/2499ecbd-1cda-49a9-8c8a-e80d44127f01-kube-api-access-bp692\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjxwg\" (UID: \"2499ecbd-1cda-49a9-8c8a-e80d44127f01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511157 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-config\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511178 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-serving-cert\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511204 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcwnk\" (UniqueName: \"kubernetes.io/projected/3d0af3ff-5d7b-41ae-be27-4dea7a282d86-kube-api-access-xcwnk\") pod \"kube-storage-version-migrator-operator-b67b599dd-vf9h5\" (UID: \"3d0af3ff-5d7b-41ae-be27-4dea7a282d86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511234 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7754ebe-fe0e-44a5-b463-1d005035d249-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pzx5g\" (UID: \"e7754ebe-fe0e-44a5-b463-1d005035d249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511257 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/59ce6c52-f027-43b4-904f-c402047a39f0-config\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511277 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df446b8a-9d6b-41e5-9b7f-2ffa97a1217c-cert\") pod \"ingress-canary-bnznn\" (UID: \"df446b8a-9d6b-41e5-9b7f-2ffa97a1217c\") " pod="openshift-ingress-canary/ingress-canary-bnznn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511299 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6j2p\" (UniqueName: \"kubernetes.io/projected/9eaef062-e274-4f3c-8ce2-3ea23e7106da-kube-api-access-s6j2p\") pod \"migrator-59844c95c7-pr577\" (UID: \"9eaef062-e274-4f3c-8ce2-3ea23e7106da\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511323 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0af3ff-5d7b-41ae-be27-4dea7a282d86-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vf9h5\" (UID: \"3d0af3ff-5d7b-41ae-be27-4dea7a282d86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511347 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511396 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ce6c52-f027-43b4-904f-c402047a39f0-serving-cert\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511424 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szh8z\" (UniqueName: \"kubernetes.io/projected/e85d92ae-30aa-4302-b217-43a48dcadd8a-kube-api-access-szh8z\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511455 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48cda75c-3d0c-44e1-8f98-c191a4e79e1b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ffvcj\" (UID: \"48cda75c-3d0c-44e1-8f98-c191a4e79e1b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511479 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-registration-dir\") pod 
\"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511504 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511553 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48cda75c-3d0c-44e1-8f98-c191a4e79e1b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ffvcj\" (UID: \"48cda75c-3d0c-44e1-8f98-c191a4e79e1b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511581 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-policies\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511606 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-mountpoint-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511639 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlctf\" (UniqueName: \"kubernetes.io/projected/be1070d3-8d5b-4910-aee6-3fee2a360934-kube-api-access-zlctf\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511669 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkh8x\" (UniqueName: \"kubernetes.io/projected/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-kube-api-access-fkh8x\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511698 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7754ebe-fe0e-44a5-b463-1d005035d249-config\") pod \"kube-controller-manager-operator-78b949d7b-pzx5g\" (UID: \"e7754ebe-fe0e-44a5-b463-1d005035d249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511724 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-socket-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 
crc kubenswrapper[4689]: I1201 08:41:08.511749 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8mk\" (UniqueName: \"kubernetes.io/projected/45f382fb-86c3-493f-a2ab-eb9b51923752-kube-api-access-5j8mk\") pod \"machine-config-controller-84d6567774-f6t2n\" (UID: \"45f382fb-86c3-493f-a2ab-eb9b51923752\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511790 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-dir\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511813 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadd1959-680d-4f67-9af9-65d8519398df-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bsffw\" (UID: \"dadd1959-680d-4f67-9af9-65d8519398df\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511838 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ns49\" (UniqueName: \"kubernetes.io/projected/df446b8a-9d6b-41e5-9b7f-2ffa97a1217c-kube-api-access-4ns49\") pod \"ingress-canary-bnznn\" (UID: \"df446b8a-9d6b-41e5-9b7f-2ffa97a1217c\") " pod="openshift-ingress-canary/ingress-canary-bnznn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511864 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511889 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-node-bootstrap-token\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511914 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511944 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l54ll\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511973 4689 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadd1959-680d-4f67-9af9-65d8519398df-config\") pod \"kube-apiserver-operator-766d6c64bb-bsffw\" (UID: \"dadd1959-680d-4f67-9af9-65d8519398df\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.511997 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3bb10b5c-893e-422d-a60f-101f4717b0bc-default-certificate\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512017 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1b970c0-59a2-4782-8664-b17a7d7a8202-srv-cert\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512048 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkmmm\" (UniqueName: \"kubernetes.io/projected/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-kube-api-access-nkmmm\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512072 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2499ecbd-1cda-49a9-8c8a-e80d44127f01-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjxwg\" (UID: \"2499ecbd-1cda-49a9-8c8a-e80d44127f01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512095 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxzzf\" (UniqueName: \"kubernetes.io/projected/bd8122c2-aaf0-4148-849c-ca4502dd0f55-kube-api-access-zxzzf\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512116 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512144 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-key\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512169 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf58a7d2-9013-4cdf-a435-67695f7677a1-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512191 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7754ebe-fe0e-44a5-b463-1d005035d249-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pzx5g\" (UID: \"e7754ebe-fe0e-44a5-b463-1d005035d249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512214 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512240 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1b970c0-59a2-4782-8664-b17a7d7a8202-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512262 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d52jc\" (UniqueName: \"kubernetes.io/projected/cf58a7d2-9013-4cdf-a435-67695f7677a1-kube-api-access-d52jc\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512284 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-certs\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512310 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-config\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512331 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb20738-492b-4b13-bf8a-5c32aabc0f32-serving-cert\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512360 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512416 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-plugins-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512434 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjzt6\" (UniqueName: \"kubernetes.io/projected/2546c757-03ab-4ba3-95d0-aa537cd615fb-kube-api-access-mjzt6\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512454 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1120a89b-2c45-428f-8577-eb6eb712961b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4nmmg\" (UID: \"1120a89b-2c45-428f-8577-eb6eb712961b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512471 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b-profile-collector-cert\") pod \"catalog-operator-68c6474976-ltkzh\" (UID: \"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512499 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2546c757-03ab-4ba3-95d0-aa537cd615fb-tmpfs\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512514 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd8122c2-aaf0-4148-849c-ca4502dd0f55-serving-cert\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512529 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bb10b5c-893e-422d-a60f-101f4717b0bc-service-ca-bundle\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512546 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-images\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512569 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45f382fb-86c3-493f-a2ab-eb9b51923752-proxy-tls\") pod \"machine-config-controller-84d6567774-f6t2n\" (UID: \"45f382fb-86c3-493f-a2ab-eb9b51923752\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512592 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828d985f-2b1a-47df-b653-907e8684d1f5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vhndn\" (UID: \"828d985f-2b1a-47df-b653-907e8684d1f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512618 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1070d3-8d5b-4910-aee6-3fee2a360934-config-volume\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512634 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3bb10b5c-893e-422d-a60f-101f4717b0bc-stats-auth\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512651 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xffcg\" (UniqueName: \"kubernetes.io/projected/2fd47e85-de9d-475a-8907-4e805cb1cfc8-kube-api-access-xffcg\") pod \"marketplace-operator-79b997595-l54ll\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512668 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-apiservice-cert\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512688 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm94m\" (UniqueName: \"kubernetes.io/projected/b1b970c0-59a2-4782-8664-b17a7d7a8202-kube-api-access-hm94m\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512706 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4095fada-3a3f-4938-a63b-07eb736ad683-config-volume\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512722 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7r8t\" (UniqueName: \"kubernetes.io/projected/4095fada-3a3f-4938-a63b-07eb736ad683-kube-api-access-k7r8t\") pod 
\"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512746 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-client-ca\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512762 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z422r\" (UniqueName: \"kubernetes.io/projected/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-kube-api-access-z422r\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512779 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bb10b5c-893e-422d-a60f-101f4717b0bc-metrics-certs\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512803 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512819 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dadd1959-680d-4f67-9af9-65d8519398df-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bsffw\" (UID: \"dadd1959-680d-4f67-9af9-65d8519398df\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512835 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512852 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512869 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb277\" (UniqueName: \"kubernetes.io/projected/dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b-kube-api-access-pb277\") pod \"catalog-operator-68c6474976-ltkzh\" (UID: \"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 
08:41:08.512884 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828d985f-2b1a-47df-b653-907e8684d1f5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vhndn\" (UID: \"828d985f-2b1a-47df-b653-907e8684d1f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512910 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-webhook-cert\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512927 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf58a7d2-9013-4cdf-a435-67695f7677a1-trusted-ca\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512952 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-cabundle\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512967 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.512982 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/065b75bb-d7a1-478c-bb62-cec913693a7e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g9pxf\" (UID: \"065b75bb-d7a1-478c-bb62-cec913693a7e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.514437 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-trusted-ca\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.514645 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-csi-data-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.514954 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-mountpoint-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: 
\"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.515960 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45f382fb-86c3-493f-a2ab-eb9b51923752-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f6t2n\" (UID: \"45f382fb-86c3-493f-a2ab-eb9b51923752\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.517663 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-config\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.517904 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-config\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.518171 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-registration-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.518198 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.518211 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.518954 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.519126 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 
08:41:08.519208 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-dir\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.519247 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-policies\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.519918 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-client-ca\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.520429 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.521104 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadd1959-680d-4f67-9af9-65d8519398df-config\") pod \"kube-apiserver-operator-766d6c64bb-bsffw\" (UID: \"dadd1959-680d-4f67-9af9-65d8519398df\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.521685 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-client-ca\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.521734 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.522421 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.522716 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.522937 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-config\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.523709 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadd1959-680d-4f67-9af9-65d8519398df-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bsffw\" (UID: \"dadd1959-680d-4f67-9af9-65d8519398df\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.523806 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-socket-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.524132 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.525545 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-serving-cert\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.525726 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.025711109 +0000 UTC m=+149.097999013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.525787 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-plugins-dir\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.528097 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.529363 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb20738-492b-4b13-bf8a-5c32aabc0f32-serving-cert\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.529694 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2546c757-03ab-4ba3-95d0-aa537cd615fb-tmpfs\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.531365 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.531595 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.531817 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.533178 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd8122c2-aaf0-4148-849c-ca4502dd0f55-serving-cert\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:08 
crc kubenswrapper[4689]: I1201 08:41:08.533557 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.537860 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1120a89b-2c45-428f-8577-eb6eb712961b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4nmmg\" (UID: \"1120a89b-2c45-428f-8577-eb6eb712961b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.543177 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf58a7d2-9013-4cdf-a435-67695f7677a1-metrics-tls\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.559824 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.566959 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf58a7d2-9013-4cdf-a435-67695f7677a1-trusted-ca\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.573998 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.614581 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.615201 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.115181527 +0000 UTC m=+149.187469431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.615816 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.616160 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-node-bootstrap-token\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.616255 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.616407 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-certs\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.616506 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.616667 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7r8t\" (UniqueName: \"kubernetes.io/projected/4095fada-3a3f-4938-a63b-07eb736ad683-kube-api-access-k7r8t\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.616755 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4095fada-3a3f-4938-a63b-07eb736ad683-config-volume\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.616915 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xnjd\" (UniqueName: \"kubernetes.io/projected/73a295f0-f461-4f83-af34-72b309949e99-kube-api-access-2xnjd\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn" Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.617068 4689 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.117053577 +0000 UTC m=+149.189341481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.617215 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4095fada-3a3f-4938-a63b-07eb736ad683-metrics-tls\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.633773 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.655132 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.659632 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/828d985f-2b1a-47df-b653-907e8684d1f5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vhndn\" (UID: \"828d985f-2b1a-47df-b653-907e8684d1f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.675018 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.676718 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/828d985f-2b1a-47df-b653-907e8684d1f5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vhndn\" (UID: \"828d985f-2b1a-47df-b653-907e8684d1f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.694184 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.715850 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.718127 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.718701 4689 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.218668112 +0000 UTC m=+149.290956016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.733862 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.754057 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.764189 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7754ebe-fe0e-44a5-b463-1d005035d249-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pzx5g\" (UID: \"e7754ebe-fe0e-44a5-b463-1d005035d249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.773726 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.778207 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7754ebe-fe0e-44a5-b463-1d005035d249-config\") pod \"kube-controller-manager-operator-78b949d7b-pzx5g\" (UID: \"e7754ebe-fe0e-44a5-b463-1d005035d249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.803359 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.813346 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.814812 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2499ecbd-1cda-49a9-8c8a-e80d44127f01-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjxwg\" (UID: \"2499ecbd-1cda-49a9-8c8a-e80d44127f01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.820632 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.820776 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.820904 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.821076 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.32105912 +0000 UTC m=+149.393347024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.821142 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.824045 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.824661 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.834601 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.854732 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 
08:41:08.874071 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.888495 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0af3ff-5d7b-41ae-be27-4dea7a282d86-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vf9h5\" (UID: \"3d0af3ff-5d7b-41ae-be27-4dea7a282d86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.893691 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.914585 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.922008 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.922408 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.422352725 +0000 UTC m=+149.494640669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.922597 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.922850 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:41:08 crc kubenswrapper[4689]: E1201 08:41:08.922994 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.422977625 +0000 UTC m=+149.495265519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.923812 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0af3ff-5d7b-41ae-be27-4dea7a282d86-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vf9h5\" (UID: \"3d0af3ff-5d7b-41ae-be27-4dea7a282d86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.926605 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.934236 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.954129 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.972783 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.973879 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.982069 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48cda75c-3d0c-44e1-8f98-c191a4e79e1b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ffvcj\" (UID: \"48cda75c-3d0c-44e1-8f98-c191a4e79e1b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:08 crc kubenswrapper[4689]: I1201 08:41:08.994597 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.004589 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48cda75c-3d0c-44e1-8f98-c191a4e79e1b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ffvcj\" (UID: \"48cda75c-3d0c-44e1-8f98-c191a4e79e1b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.013911 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.023953 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.024178 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.524128995 +0000 UTC m=+149.596416909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.024636 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.025248 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 08:41:09.525176488 +0000 UTC m=+149.597464432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.030763 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3bb10b5c-893e-422d-a60f-101f4717b0bc-metrics-certs\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.033872 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.045918 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3bb10b5c-893e-422d-a60f-101f4717b0bc-default-certificate\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.054125 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.074126 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.089298 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.094410 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.108186 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3bb10b5c-893e-422d-a60f-101f4717b0bc-stats-auth\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.115605 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.126190 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.126359 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.626321678 +0000 UTC m=+149.698609582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.127562 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.127983 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.627971841 +0000 UTC m=+149.700259825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.134530 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.147132 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.147233 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.153952 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.155526 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-images\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.174571 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.194668 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.198262 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-proxy-tls\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.213765 4689 request.go:700] Waited for 1.014142793s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.230247 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.230627 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.730593428 +0000 UTC m=+149.802881332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.230871 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.231465 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.731442344 +0000 UTC m=+149.803730278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.233392 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkb9h\" (UniqueName: \"kubernetes.io/projected/70e552a9-22d9-4efc-b40a-25232123691b-kube-api-access-wkb9h\") pod \"apiserver-7bbb656c7d-nnx7f\" (UID: \"70e552a9-22d9-4efc-b40a-25232123691b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.234097 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.249696 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/065b75bb-d7a1-478c-bb62-cec913693a7e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g9pxf\" (UID: \"065b75bb-d7a1-478c-bb62-cec913693a7e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.254044 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.274049 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.294636 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.314623 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.332342 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.333632 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l54ll\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-l54ll"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.333661 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.833634937 +0000 UTC m=+149.905922861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.345227 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.351207 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l54ll\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-l54ll"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.354004 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.373611 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.379915 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45f382fb-86c3-493f-a2ab-eb9b51923752-proxy-tls\") pod \"machine-config-controller-84d6567774-f6t2n\" (UID: \"45f382fb-86c3-493f-a2ab-eb9b51923752\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.394705 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.395866 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bb10b5c-893e-422d-a60f-101f4717b0bc-service-ca-bundle\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.396424 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.409668 4689 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.410209 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57864f38-0ce2-401c-a4c4-96fc7cce8346-config podName:57864f38-0ce2-401c-a4c4-96fc7cce8346 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.910163164 +0000 UTC m=+149.982451108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/57864f38-0ce2-401c-a4c4-96fc7cce8346-config") pod "machine-approver-56656f9798-24jcf" (UID: "57864f38-0ce2-401c-a4c4-96fc7cce8346") : failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.416504 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.434880 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.436818 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.437222 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:09.937208384 +0000 UTC m=+150.009496288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.453676 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.474157 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.485735 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a4774c-b15d-424e-bb37-d6880da5ad85-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zvzpg\" (UID: \"c1a4774c-b15d-424e-bb37-d6880da5ad85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.494717 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.501222 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b-profile-collector-cert\") pod \"catalog-operator-68c6474976-ltkzh\" (UID: \"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.502840 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be1070d3-8d5b-4910-aee6-3fee2a360934-secret-volume\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.504463 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1b970c0-59a2-4782-8664-b17a7d7a8202-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.514112 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.514431 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.518072 4689 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.518166 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-key podName:6e2d0411-e6d8-49a0-90c5-e1454e71bf44 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.01813786 +0000 UTC m=+150.090425764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-key") pod "service-ca-9c57cc56f-fdcdc" (UID: "6e2d0411-e6d8-49a0-90c5-e1454e71bf44") : failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.522095 4689 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.522144 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b970c0-59a2-4782-8664-b17a7d7a8202-srv-cert podName:b1b970c0-59a2-4782-8664-b17a7d7a8202 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.022131508 +0000 UTC m=+150.094419402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b1b970c0-59a2-4782-8664-b17a7d7a8202-srv-cert") pod "olm-operator-6b444d44fb-lz96b" (UID: "b1b970c0-59a2-4782-8664-b17a7d7a8202") : failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.522692 4689 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.522819 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/be1070d3-8d5b-4910-aee6-3fee2a360934-config-volume podName:be1070d3-8d5b-4910-aee6-3fee2a360934 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.022785859 +0000 UTC m=+150.095073763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be1070d3-8d5b-4910-aee6-3fee2a360934-config-volume") pod "collect-profiles-29409630-9rhzp" (UID: "be1070d3-8d5b-4910-aee6-3fee2a360934") : failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.523262 4689 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.523309 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df446b8a-9d6b-41e5-9b7f-2ffa97a1217c-cert podName:df446b8a-9d6b-41e5-9b7f-2ffa97a1217c nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.023299205 +0000 UTC m=+150.095587109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df446b8a-9d6b-41e5-9b7f-2ffa97a1217c-cert") pod "ingress-canary-bnznn" (UID: "df446b8a-9d6b-41e5-9b7f-2ffa97a1217c") : failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.523338 4689 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.523389 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59ce6c52-f027-43b4-904f-c402047a39f0-serving-cert podName:59ce6c52-f027-43b4-904f-c402047a39f0 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.023380387 +0000 UTC m=+150.095668291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/59ce6c52-f027-43b4-904f-c402047a39f0-serving-cert") pod "service-ca-operator-777779d784-59vc4" (UID: "59ce6c52-f027-43b4-904f-c402047a39f0") : failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.523391 4689 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.523518 4689 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.523544 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-apiservice-cert podName:2546c757-03ab-4ba3-95d0-aa537cd615fb nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.023509652 +0000 UTC m=+150.095797746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-apiservice-cert") pod "packageserver-d55dfcdfc-bjlxg" (UID: "2546c757-03ab-4ba3-95d0-aa537cd615fb") : failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.523697 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/59ce6c52-f027-43b4-904f-c402047a39f0-config podName:59ce6c52-f027-43b4-904f-c402047a39f0 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.023683197 +0000 UTC m=+150.095971101 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/59ce6c52-f027-43b4-904f-c402047a39f0-config") pod "service-ca-operator-777779d784-59vc4" (UID: "59ce6c52-f027-43b4-904f-c402047a39f0") : failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.525632 4689 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.525749 4689 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.525755 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-webhook-cert podName:2546c757-03ab-4ba3-95d0-aa537cd615fb nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.025743083 +0000 UTC m=+150.098030987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-webhook-cert") pod "packageserver-d55dfcdfc-bjlxg" (UID: "2546c757-03ab-4ba3-95d0-aa537cd615fb") : failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.526440 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-cabundle podName:6e2d0411-e6d8-49a0-90c5-e1454e71bf44 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.026424414 +0000 UTC m=+150.098712308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-cabundle") pod "service-ca-9c57cc56f-fdcdc" (UID: "6e2d0411-e6d8-49a0-90c5-e1454e71bf44") : failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.527080 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b-srv-cert\") pod \"catalog-operator-68c6474976-ltkzh\" (UID: \"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.532836 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.537648 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.537942 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.03790565 +0000 UTC m=+150.110193554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.542278 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.542617 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.04260319 +0000 UTC m=+150.114891094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.554317 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.565058 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.575262 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.594795 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.615842 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.619545 4689 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.619673 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4095fada-3a3f-4938-a63b-07eb736ad683-config-volume podName:4095fada-3a3f-4938-a63b-07eb736ad683 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.119648522 +0000 UTC m=+150.191936436 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4095fada-3a3f-4938-a63b-07eb736ad683-config-volume") pod "dns-default-wrw72" (UID: "4095fada-3a3f-4938-a63b-07eb736ad683") : failed to sync configmap cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.619971 4689 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.620005 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4095fada-3a3f-4938-a63b-07eb736ad683-metrics-tls podName:4095fada-3a3f-4938-a63b-07eb736ad683 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.119997884 +0000 UTC m=+150.192285788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4095fada-3a3f-4938-a63b-07eb736ad683-metrics-tls") pod "dns-default-wrw72" (UID: "4095fada-3a3f-4938-a63b-07eb736ad683") : failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.620028 4689 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.620055 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-certs podName:73a295f0-f461-4f83-af34-72b309949e99 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.120046495 +0000 UTC m=+150.192334399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-certs") pod "machine-config-server-qszrn" (UID: "73a295f0-f461-4f83-af34-72b309949e99") : failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.620074 4689 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.620098 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-node-bootstrap-token podName:73a295f0-f461-4f83-af34-72b309949e99 nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.120092457 +0000 UTC m=+150.192380361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-node-bootstrap-token") pod "machine-config-server-qszrn" (UID: "73a295f0-f461-4f83-af34-72b309949e99") : failed to sync secret cache: timed out waiting for the condition
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.638098 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.649801 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.650041 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.150001338 +0000 UTC m=+150.222289262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.656786 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.656998 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.657926 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.15789963 +0000 UTC m=+150.230187534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.675524 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.695018 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.725073 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.734098 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.754211 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.763098 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.768447 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.268409537 +0000 UTC m=+150.340697441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.768655 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.769329 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.269322137 +0000 UTC m=+150.341610041 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.781716 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.795601 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 01 08:41:09 crc kubenswrapper[4689]: W1201 08:41:09.807770 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-7a5b29c3d916a3e92a5c451c9c02c498de9dbab0e8208af38e09baa46d7d52b5 WatchSource:0}: Error finding container 7a5b29c3d916a3e92a5c451c9c02c498de9dbab0e8208af38e09baa46d7d52b5: Status 404 returned error can't find the container with id 7a5b29c3d916a3e92a5c451c9c02c498de9dbab0e8208af38e09baa46d7d52b5
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.819176 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: W1201 08:41:09.828839 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-39366a52f58d202543e077f1691f567fbec0771ee68612fae0fe3be5dffaa04e WatchSource:0}: Error finding container 39366a52f58d202543e077f1691f567fbec0771ee68612fae0fe3be5dffaa04e: Status 404 returned error can't find the container with id 39366a52f58d202543e077f1691f567fbec0771ee68612fae0fe3be5dffaa04e
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.834397 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.854230 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.857484 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f"]
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.873233 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.873914 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.373897965 +0000 UTC m=+150.446185859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.877175 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: W1201 08:41:09.887064 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70e552a9_22d9_4efc_b40a_25232123691b.slice/crio-f32481a856ad2a0c91b63da4029f37ec4c3850baa50e82c3702769e88e9417d2 WatchSource:0}: Error finding container f32481a856ad2a0c91b63da4029f37ec4c3850baa50e82c3702769e88e9417d2: Status 404 returned error can't find the container with id f32481a856ad2a0c91b63da4029f37ec4c3850baa50e82c3702769e88e9417d2
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.895214 4689 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.915350 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.935128 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.955046 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.975547 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.976176 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57864f38-0ce2-401c-a4c4-96fc7cce8346-config\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf"
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.976669 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:09 crc kubenswrapper[4689]: E1201 08:41:09.977216 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.477197894 +0000 UTC m=+150.549485798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:09 crc kubenswrapper[4689]: I1201 08:41:09.995040 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.015092 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.035425 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.055011 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.077148 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.077452 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-apiservice-cert\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.078335 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-webhook-cert\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg"
Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.078452 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.578416705 +0000 UTC m=+150.650704609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.078597 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-cabundle\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.078954 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ce6c52-f027-43b4-904f-c402047a39f0-config\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.079019 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df446b8a-9d6b-41e5-9b7f-2ffa97a1217c-cert\") pod \"ingress-canary-bnznn\" (UID: \"df446b8a-9d6b-41e5-9b7f-2ffa97a1217c\") " pod="openshift-ingress-canary/ingress-canary-bnznn"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.079578 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ce6c52-f027-43b4-904f-c402047a39f0-serving-cert\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.079760 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1b970c0-59a2-4782-8664-b17a7d7a8202-srv-cert\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.079847 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-cabundle\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.079853 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-key\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.079954 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.080019 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1070d3-8d5b-4910-aee6-3fee2a360934-config-volume\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.080339 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ce6c52-f027-43b4-904f-c402047a39f0-config\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.080949 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1070d3-8d5b-4910-aee6-3fee2a360934-config-volume\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp"
Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.081316 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.581304417 +0000 UTC m=+150.653592411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.086969 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-apiservice-cert\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.087057 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-signing-key\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.091014 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1b970c0-59a2-4782-8664-b17a7d7a8202-srv-cert\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.092003 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ce6c52-f027-43b4-904f-c402047a39f0-serving-cert\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.092605 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2546c757-03ab-4ba3-95d0-aa537cd615fb-webhook-cert\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.095740 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df446b8a-9d6b-41e5-9b7f-2ffa97a1217c-cert\") pod \"ingress-canary-bnznn\" (UID: \"df446b8a-9d6b-41e5-9b7f-2ffa97a1217c\") " pod="openshift-ingress-canary/ingress-canary-bnznn"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.103638 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w94sd\" (UniqueName: \"kubernetes.io/projected/21eaf97a-bf73-4e70-a9bc-153b17b8a799-kube-api-access-w94sd\") pod \"openshift-config-operator-7777fb866f-29dmp\" (UID: \"21eaf97a-bf73-4e70-a9bc-153b17b8a799\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.116960 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxpvk\" (UniqueName: \"kubernetes.io/projected/8bebf2e0-afe5-4e98-8cdf-496c5d355ef9-kube-api-access-mxpvk\") pod \"authentication-operator-69f744f599-gwkk8\" (UID: \"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.133120 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-bound-sa-token\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.149974 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwxz6\" (UniqueName: \"kubernetes.io/projected/c062b92b-1709-4892-9b40-b1d2405d5812-kube-api-access-gwxz6\") pod \"machine-api-operator-5694c8668f-chlnk\" (UID: \"c062b92b-1709-4892-9b40-b1d2405d5812\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.169617 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcl2c\" (UniqueName: \"kubernetes.io/projected/c389f615-2c0f-467b-924e-ad740d3fff07-kube-api-access-gcl2c\") pod \"apiserver-76f77b778f-ch9jh\" (UID: \"c389f615-2c0f-467b-924e-ad740d3fff07\") " pod="openshift-apiserver/apiserver-76f77b778f-ch9jh"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.181473 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.181617 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.68158317 +0000 UTC m=+150.753871074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.181881 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-node-bootstrap-token\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.181958 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-certs\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.181998 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.182080 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4095fada-3a3f-4938-a63b-07eb736ad683-config-volume\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.182297 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4095fada-3a3f-4938-a63b-07eb736ad683-metrics-tls\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72"
Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.182732 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.682700005 +0000 UTC m=+150.754987909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.182909 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4095fada-3a3f-4938-a63b-07eb736ad683-config-volume\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.185354 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-certs\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.185929 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/73a295f0-f461-4f83-af34-72b309949e99-node-bootstrap-token\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.187881 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4095fada-3a3f-4938-a63b-07eb736ad683-metrics-tls\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.188633 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.208135 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sr6s\" (UniqueName: \"kubernetes.io/projected/7646df48-faa9-486b-b7fc-c8b1d97ead27-kube-api-access-4sr6s\") pod \"dns-operator-744455d44c-k7w5z\" (UID: \"7646df48-faa9-486b-b7fc-c8b1d97ead27\") " pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.213292 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.232454 4689 request.go:700] Waited for 1.812246968s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.240735 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pk9c\" (UniqueName: \"kubernetes.io/projected/e552c7d1-a8ad-4033-b89e-951c6f58588b-kube-api-access-2pk9c\") pod \"etcd-operator-b45778765-gzrk2\" (UID: \"e552c7d1-a8ad-4033-b89e-951c6f58588b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.243536 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"67f47271d2a500ab5ac8d86c284a953d0a6cef1c764df37a96713f8cac505ff2"}
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.246313 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bc5ea63ceb1a6daf3f7ff9d19275c40dd01fa46337e8d62cf8c3937a4a36a8df"}
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.246345 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"39366a52f58d202543e077f1691f567fbec0771ee68612fae0fe3be5dffaa04e"}
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.257050 4689 generic.go:334] "Generic (PLEG): container finished" podID="70e552a9-22d9-4efc-b40a-25232123691b" containerID="3d2951febf58db45697057c575e65e0c6d28d2dd1606beb16892b9335ed10935" exitCode=0
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.257158 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" event={"ID":"70e552a9-22d9-4efc-b40a-25232123691b","Type":"ContainerDied","Data":"3d2951febf58db45697057c575e65e0c6d28d2dd1606beb16892b9335ed10935"}
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.257195 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" event={"ID":"70e552a9-22d9-4efc-b40a-25232123691b","Type":"ContainerStarted","Data":"f32481a856ad2a0c91b63da4029f37ec4c3850baa50e82c3702769e88e9417d2"}
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.258591 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5qsf\" (UniqueName: \"kubernetes.io/projected/710ccb76-093a-484d-a784-737ae81e7c21-kube-api-access-l5qsf\") pod \"console-f9d7485db-j5r2f\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " pod="openshift-console/console-f9d7485db-j5r2f"
Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.268291 4689 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.273385 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px2ln\" (UniqueName: \"kubernetes.io/projected/b209cd69-6557-4b86-b7b2-680d3bcf8ec0-kube-api-access-px2ln\") pod \"cluster-image-registry-operator-dc59b4c8b-bhjqx\" (UID: \"b209cd69-6557-4b86-b7b2-680d3bcf8ec0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.274930 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f80113f00fdfffb2cdaa6a9c7046d6bee1d1240f3de14d120b777d9c99d2fa76"} Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.275011 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7a5b29c3d916a3e92a5c451c9c02c498de9dbab0e8208af38e09baa46d7d52b5"} Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.275220 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.283465 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.285291 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.78527453 +0000 UTC m=+150.857562434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.295976 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qq96\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-kube-api-access-6qq96\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.312034 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k5h2\" (UniqueName: \"kubernetes.io/projected/bd24264f-fc40-410e-9bed-3f8e340035b5-kube-api-access-5k5h2\") pod \"downloads-7954f5f757-xx949\" (UID: \"bd24264f-fc40-410e-9bed-3f8e340035b5\") " pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.326498 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.339295 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwd9j\" (UniqueName: \"kubernetes.io/projected/f005f766-04a5-4b03-8c50-2fd9ddf967be-kube-api-access-hwd9j\") pod \"openshift-controller-manager-operator-756b6f6bc6-h47fn\" (UID: \"f005f766-04a5-4b03-8c50-2fd9ddf967be\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.346313 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.356315 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.359841 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph8kj\" (UniqueName: \"kubernetes.io/projected/57864f38-0ce2-401c-a4c4-96fc7cce8346-kube-api-access-ph8kj\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.367693 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.376037 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.382374 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgs9z\" (UniqueName: \"kubernetes.io/projected/828d985f-2b1a-47df-b653-907e8684d1f5-kube-api-access-jgs9z\") pod \"openshift-apiserver-operator-796bbdcf4f-vhndn\" (UID: \"828d985f-2b1a-47df-b653-907e8684d1f5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.385678 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.386209 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.886174382 +0000 UTC m=+150.958462276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.393093 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.395063 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbj5g\" (UniqueName: \"kubernetes.io/projected/3bb10b5c-893e-422d-a60f-101f4717b0bc-kube-api-access-rbj5g\") pod \"router-default-5444994796-hb577\" (UID: \"3bb10b5c-893e-422d-a60f-101f4717b0bc\") " pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.412104 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4wck\" (UniqueName: \"kubernetes.io/projected/c1a4774c-b15d-424e-bb37-d6880da5ad85-kube-api-access-j4wck\") pod \"package-server-manager-789f6589d5-zvzpg\" (UID: \"c1a4774c-b15d-424e-bb37-d6880da5ad85\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.439060 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k7w5z"] Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.439723 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8j65\" (UniqueName: \"kubernetes.io/projected/304e31dc-6fcd-4654-9c3d-ef693f7c71a6-kube-api-access-j8j65\") pod \"console-operator-58897d9998-z629s\" (UID: \"304e31dc-6fcd-4654-9c3d-ef693f7c71a6\") " pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.471944 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.472199 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcv2\" (UniqueName: \"kubernetes.io/projected/1120a89b-2c45-428f-8577-eb6eb712961b-kube-api-access-zxcv2\") pod \"cluster-samples-operator-665b6dd947-4nmmg\" (UID: \"1120a89b-2c45-428f-8577-eb6eb712961b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.480066 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmvxh\" (UniqueName: \"kubernetes.io/projected/59ce6c52-f027-43b4-904f-c402047a39f0-kube-api-access-cmvxh\") pod \"service-ca-operator-777779d784-59vc4\" (UID: \"59ce6c52-f027-43b4-904f-c402047a39f0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.487399 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.487956 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:10.987940181 +0000 UTC m=+151.060228085 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.488111 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.508067 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.503330 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cf58a7d2-9013-4cdf-a435-67695f7677a1-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.517614 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8956x\" (UniqueName: \"kubernetes.io/projected/065b75bb-d7a1-478c-bb62-cec913693a7e-kube-api-access-8956x\") pod \"multus-admission-controller-857f4d67dd-g9pxf\" (UID: \"065b75bb-d7a1-478c-bb62-cec913693a7e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.544217 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.567309 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp692\" (UniqueName: \"kubernetes.io/projected/2499ecbd-1cda-49a9-8c8a-e80d44127f01-kube-api-access-bp692\") pod \"control-plane-machine-set-operator-78cbb6b69f-xjxwg\" (UID: \"2499ecbd-1cda-49a9-8c8a-e80d44127f01\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.581542 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.588297 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.588921 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:11.088906735 +0000 UTC m=+151.161194629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.605006 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkh8x\" (UniqueName: \"kubernetes.io/projected/a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf-kube-api-access-fkh8x\") pod \"machine-config-operator-74547568cd-b7hxr\" (UID: \"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.605518 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.625154 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48cda75c-3d0c-44e1-8f98-c191a4e79e1b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ffvcj\" (UID: \"48cda75c-3d0c-44e1-8f98-c191a4e79e1b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.633831 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szh8z\" (UniqueName: \"kubernetes.io/projected/e85d92ae-30aa-4302-b217-43a48dcadd8a-kube-api-access-szh8z\") pod \"oauth-openshift-558db77b4-8rfdp\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") " pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.637334 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlctf\" (UniqueName: \"kubernetes.io/projected/be1070d3-8d5b-4910-aee6-3fee2a360934-kube-api-access-zlctf\") pod \"collect-profiles-29409630-9rhzp\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.650500 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z6gq\" (UniqueName: \"kubernetes.io/projected/5fb20738-492b-4b13-bf8a-5c32aabc0f32-kube-api-access-2z6gq\") pod \"controller-manager-879f6c89f-9nx2j\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.654343 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.662156 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8mk\" (UniqueName: \"kubernetes.io/projected/45f382fb-86c3-493f-a2ab-eb9b51923752-kube-api-access-5j8mk\") pod \"machine-config-controller-84d6567774-f6t2n\" (UID: \"45f382fb-86c3-493f-a2ab-eb9b51923752\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.669108 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.692131 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.692565 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:11.192544095 +0000 UTC m=+151.264831999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.702168 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.707629 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.716711 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xffcg\" (UniqueName: \"kubernetes.io/projected/2fd47e85-de9d-475a-8907-4e805cb1cfc8-kube-api-access-xffcg\") pod \"marketplace-operator-79b997595-l54ll\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") " pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.717040 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.726002 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.786105 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.800616 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.801556 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.801893 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:11.301878675 +0000 UTC m=+151.374166579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.864149 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm94m\" (UniqueName: \"kubernetes.io/projected/b1b970c0-59a2-4782-8664-b17a7d7a8202-kube-api-access-hm94m\") pod \"olm-operator-6b444d44fb-lz96b\" (UID: \"b1b970c0-59a2-4782-8664-b17a7d7a8202\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.868802 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dadd1959-680d-4f67-9af9-65d8519398df-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bsffw\" (UID: \"dadd1959-680d-4f67-9af9-65d8519398df\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.870148 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z422r\" (UniqueName: \"kubernetes.io/projected/79369af1-c9d2-4d8e-a675-a5174bc0e4ad-kube-api-access-z422r\") pod \"csi-hostpathplugin-8pg9k\" (UID: \"79369af1-c9d2-4d8e-a675-a5174bc0e4ad\") " pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.871678 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.872282 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcwnk\" (UniqueName: \"kubernetes.io/projected/3d0af3ff-5d7b-41ae-be27-4dea7a282d86-kube-api-access-xcwnk\") pod \"kube-storage-version-migrator-operator-b67b599dd-vf9h5\" (UID: \"3d0af3ff-5d7b-41ae-be27-4dea7a282d86\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.898331 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.902594 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:10 crc kubenswrapper[4689]: E1201 08:41:10.902877 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:11.40285983 +0000 UTC m=+151.475147734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.903166 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.911792 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxzzf\" (UniqueName: \"kubernetes.io/projected/bd8122c2-aaf0-4148-849c-ca4502dd0f55-kube-api-access-zxzzf\") pod \"route-controller-manager-6576b87f9c-6z2v4\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.935382 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.936470 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ns49\" (UniqueName: \"kubernetes.io/projected/df446b8a-9d6b-41e5-9b7f-2ffa97a1217c-kube-api-access-4ns49\") pod \"ingress-canary-bnznn\" (UID: \"df446b8a-9d6b-41e5-9b7f-2ffa97a1217c\") " pod="openshift-ingress-canary/ingress-canary-bnznn" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.941085 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d52jc\" (UniqueName: \"kubernetes.io/projected/cf58a7d2-9013-4cdf-a435-67695f7677a1-kube-api-access-d52jc\") pod \"ingress-operator-5b745b69d9-sc9lr\" (UID: \"cf58a7d2-9013-4cdf-a435-67695f7677a1\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.970486 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gzrk2"] Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.974339 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.975961 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bnznn" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.978018 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57864f38-0ce2-401c-a4c4-96fc7cce8346-config\") pod \"machine-approver-56656f9798-24jcf\" (UID: \"57864f38-0ce2-401c-a4c4-96fc7cce8346\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.995519 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkmmm\" (UniqueName: \"kubernetes.io/projected/6e2d0411-e6d8-49a0-90c5-e1454e71bf44-kube-api-access-nkmmm\") pod \"service-ca-9c57cc56f-fdcdc\" (UID: \"6e2d0411-e6d8-49a0-90c5-e1454e71bf44\") " pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" Dec 01 08:41:10 crc kubenswrapper[4689]: I1201 08:41:10.995565 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6j2p\" (UniqueName: \"kubernetes.io/projected/9eaef062-e274-4f3c-8ce2-3ea23e7106da-kube-api-access-s6j2p\") pod \"migrator-59844c95c7-pr577\" (UID: \"9eaef062-e274-4f3c-8ce2-3ea23e7106da\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.005981 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.006406 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:11.506391995 +0000 UTC m=+151.578679899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.009280 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xnjd\" (UniqueName: \"kubernetes.io/projected/73a295f0-f461-4f83-af34-72b309949e99-kube-api-access-2xnjd\") pod \"machine-config-server-qszrn\" (UID: \"73a295f0-f461-4f83-af34-72b309949e99\") " pod="openshift-machine-config-operator/machine-config-server-qszrn" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.011280 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb277\" (UniqueName: \"kubernetes.io/projected/dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b-kube-api-access-pb277\") pod \"catalog-operator-68c6474976-ltkzh\" (UID: \"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.012116 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjzt6\" (UniqueName: \"kubernetes.io/projected/2546c757-03ab-4ba3-95d0-aa537cd615fb-kube-api-access-mjzt6\") pod \"packageserver-d55dfcdfc-bjlxg\" (UID: \"2546c757-03ab-4ba3-95d0-aa537cd615fb\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.016084 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7754ebe-fe0e-44a5-b463-1d005035d249-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pzx5g\" (UID: \"e7754ebe-fe0e-44a5-b463-1d005035d249\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.018416 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7r8t\" (UniqueName: \"kubernetes.io/projected/4095fada-3a3f-4938-a63b-07eb736ad683-kube-api-access-k7r8t\") pod \"dns-default-wrw72\" (UID: \"4095fada-3a3f-4938-a63b-07eb736ad683\") " pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.022651 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.028064 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.033740 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.036805 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.051979 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qszrn" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.052591 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.055252 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.065068 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.079546 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.093725 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.106749 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.107299 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:11.607279377 +0000 UTC m=+151.679567291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.211803 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.212880 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:11.712840967 +0000 UTC m=+151.785128871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.220113 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.244468 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.261692 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.301359 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" event={"ID":"7646df48-faa9-486b-b7fc-c8b1d97ead27","Type":"ContainerStarted","Data":"1d229adf5620d12f6427fb9c01506240a8d7ada5168486225bc963c1f91ae749"} Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.303839 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" event={"ID":"70e552a9-22d9-4efc-b40a-25232123691b","Type":"ContainerStarted","Data":"16267bd678e43d81ceee3b1460eaea499aaa73e9e332cb16d1d130a8abae6327"} Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.318080 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.318559 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:11.818541102 +0000 UTC m=+151.890829006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.378526 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bfc5333315674ef1a4da68dde81b2500a6b2dd15d36f114f38a1b2a00e0e9de1"} Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.400088 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hb577" event={"ID":"3bb10b5c-893e-422d-a60f-101f4717b0bc","Type":"ContainerStarted","Data":"638790024c49d03966e22daa4efc60b10bf423afe6c4551a259b849d1ac7bbbb"} Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.421115 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.423334 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:11.923318677 +0000 UTC m=+151.995606601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.521971 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.523590 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.023567318 +0000 UTC m=+152.095855222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.625240 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.625798 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.125779772 +0000 UTC m=+152.198067676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.727934 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.728699 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.228680187 +0000 UTC m=+152.300968091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.743316 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gwkk8"] Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.830047 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.830583 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.330561421 +0000 UTC m=+152.402849325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.931242 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.931509 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.431485624 +0000 UTC m=+152.503773528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: I1201 08:41:11.932597 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:11 crc kubenswrapper[4689]: E1201 08:41:11.932933 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.432918329 +0000 UTC m=+152.505206233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:11 crc kubenswrapper[4689]: W1201 08:41:11.962381 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73a295f0_f461_4f83_af34_72b309949e99.slice/crio-e54e601cc971de71330b8e783974807f30afb90a496f9b9796e19c484a945b7e WatchSource:0}: Error finding container e54e601cc971de71330b8e783974807f30afb90a496f9b9796e19c484a945b7e: Status 404 returned error can't find the container with id e54e601cc971de71330b8e783974807f30afb90a496f9b9796e19c484a945b7e Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.028264 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xx949"] Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.029591 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" podStartSLOduration=130.029556646 podStartE2EDuration="2m10.029556646s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:12.02371161 +0000 UTC m=+152.095999514" watchObservedRunningTime="2025-12-01 08:41:12.029556646 +0000 UTC m=+152.101844550" Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.037012 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.037382 4689 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.537342293 +0000 UTC m=+152.609630197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.137909 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.138613 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.638599967 +0000 UTC m=+152.710887871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.259434 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.259722 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.759690681 +0000 UTC m=+152.831978575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.370680 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.371057 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.871043746 +0000 UTC m=+152.943331650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.471790 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.472318 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:12.972295349 +0000 UTC m=+153.044583253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.573361 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.573936 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.073915894 +0000 UTC m=+153.146203798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.648969 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx"] Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.655496 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" event={"ID":"57864f38-0ce2-401c-a4c4-96fc7cce8346","Type":"ContainerStarted","Data":"8281c0e91d88b527aa002b158db5bdcb1010362f75539f1aa2979f3ebdc23199"} Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.676234 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.677591 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.177571903 +0000 UTC m=+153.249859817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.687313 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" event={"ID":"e552c7d1-a8ad-4033-b89e-951c6f58588b","Type":"ContainerStarted","Data":"32f42df2f4e0afd5e0ac0c8ba539a05c0262a965a938cbf04358ef7978853e5b"} Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.690984 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hb577" event={"ID":"3bb10b5c-893e-422d-a60f-101f4717b0bc","Type":"ContainerStarted","Data":"b98b6a2484c2a7439f680d75e6cf39bc3d0fae0ab2bbecb3f717cde47545550c"} Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.781909 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" event={"ID":"7646df48-faa9-486b-b7fc-c8b1d97ead27","Type":"ContainerStarted","Data":"739918adc0af303d9209b03473015434df0c813c21ea5598a42428f41514241e"} Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.783531 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.784958 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.284939911 +0000 UTC m=+153.357227815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.788091 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qszrn" event={"ID":"73a295f0-f461-4f83-af34-72b309949e99","Type":"ContainerStarted","Data":"e54e601cc971de71330b8e783974807f30afb90a496f9b9796e19c484a945b7e"} Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.789258 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" event={"ID":"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9","Type":"ContainerStarted","Data":"720930af0bcb8d7307bed073c37109c181d324decc886642c1e082b3ff22681b"} Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.875787 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hb577" podStartSLOduration=131.875741622 podStartE2EDuration="2m11.875741622s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:12.853004028 +0000 UTC m=+152.925291932" watchObservedRunningTime="2025-12-01 08:41:12.875741622 +0000 UTC m=+152.948029526" Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.884160 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.884722 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.384705667 +0000 UTC m=+153.456993561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.884783 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.886188 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.386178203 +0000 UTC m=+153.458466107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.985797 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.985947 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.485913518 +0000 UTC m=+153.558201422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:12 crc kubenswrapper[4689]: I1201 08:41:12.986160 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:12 crc kubenswrapper[4689]: E1201 08:41:12.986626 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.48660891 +0000 UTC m=+153.558896834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.087167 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:13 crc kubenswrapper[4689]: E1201 08:41:13.087601 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.587578044 +0000 UTC m=+153.659865948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.210426 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:13 crc kubenswrapper[4689]: E1201 08:41:13.211030 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.711015354 +0000 UTC m=+153.783303258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.338933 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:13 crc kubenswrapper[4689]: E1201 08:41:13.339274 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.839259106 +0000 UTC m=+153.911547010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.441094 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:13 crc kubenswrapper[4689]: E1201 08:41:13.441574 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:13.941556973 +0000 UTC m=+154.013844877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.511931 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.518442 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:13 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:13 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:13 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.518507 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.541962 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:13 crc kubenswrapper[4689]: E1201 08:41:13.542088 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.042069833 +0000 UTC m=+154.114357737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.542333 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:13 crc kubenswrapper[4689]: E1201 08:41:13.542686 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.042676722 +0000 UTC m=+154.114964626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.686893 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:13 crc kubenswrapper[4689]: E1201 08:41:13.687169 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.187154111 +0000 UTC m=+154.259442015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.804080 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:13 crc kubenswrapper[4689]: E1201 08:41:13.804570 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.304547888 +0000 UTC m=+154.376835792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.829972 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" event={"ID":"7646df48-faa9-486b-b7fc-c8b1d97ead27","Type":"ContainerStarted","Data":"22e77eadc45f3c3a72318a539b4fe6ef4612fd13a695dbac2459c07ee99b2c41"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.863906 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" event={"ID":"8bebf2e0-afe5-4e98-8cdf-496c5d355ef9","Type":"ContainerStarted","Data":"dc508a3f80cbf4dae0a0b11d0b83ea8c5d42865d985a5c8de156d4c36e99c83d"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.868108 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" event={"ID":"b209cd69-6557-4b86-b7b2-680d3bcf8ec0","Type":"ContainerStarted","Data":"2da4d7fa8bbe1a100d0f286b2c652d52cca2d0ff9e575ca72e4aab715491bc29"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.868150 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" event={"ID":"b209cd69-6557-4b86-b7b2-680d3bcf8ec0","Type":"ContainerStarted","Data":"67d289db7c5af2a8093d53bcf5acf1a09e03c11121cf0be6d55c8e408e94edf6"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.881803 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" event={"ID":"57864f38-0ce2-401c-a4c4-96fc7cce8346","Type":"ContainerStarted","Data":"6324d497d46500d6b141a438a167fcd9be623a11b27fb9e4fe3fb62240677b78"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.881855 4689 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" event={"ID":"57864f38-0ce2-401c-a4c4-96fc7cce8346","Type":"ContainerStarted","Data":"cbe0ae603d3628fbd09d22cda3e1d29de95bbd451b74014d33ef4115db9914be"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.883427 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx949" event={"ID":"bd24264f-fc40-410e-9bed-3f8e340035b5","Type":"ContainerStarted","Data":"efa025fea1ec8337bd709a13be3919080f774ea2493595d595a90da6dd2b01d3"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.883474 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx949" event={"ID":"bd24264f-fc40-410e-9bed-3f8e340035b5","Type":"ContainerStarted","Data":"1b90a3b5398f3bf176c31e2a23d57b9616e73db624e40a5bd078c520e2c3871d"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.884161 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.886427 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qszrn" event={"ID":"73a295f0-f461-4f83-af34-72b309949e99","Type":"ContainerStarted","Data":"bce8633a7e5e3469d53bc784cfa57a0c459578842fe003685a8a518492a759c9"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.887793 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.887840 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.891265 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" event={"ID":"e552c7d1-a8ad-4033-b89e-951c6f58588b","Type":"ContainerStarted","Data":"a794046b76f3aad8c7c56cfc418bfd89b86bf00c41a278b3e6441b42d454abf0"} Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.905213 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:13 crc kubenswrapper[4689]: E1201 08:41:13.907604 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.407575047 +0000 UTC m=+154.479862951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.938157 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-k7w5z" podStartSLOduration=132.93811191 podStartE2EDuration="2m12.93811191s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:13.864084624 +0000 UTC m=+153.936372528" watchObservedRunningTime="2025-12-01 08:41:13.93811191 +0000 UTC m=+154.010399814" Dec 01 08:41:13 crc kubenswrapper[4689]: I1201 08:41:13.940283 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" podStartSLOduration=133.940259948 podStartE2EDuration="2m13.940259948s" podCreationTimestamp="2025-12-01 08:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:13.93027931 +0000 UTC m=+154.002567224" watchObservedRunningTime="2025-12-01 08:41:13.940259948 +0000 UTC m=+154.012547852" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:13.981043 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gzrk2" podStartSLOduration=132.981010705 podStartE2EDuration="2m12.981010705s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:13.975417887 +0000 UTC m=+154.047705801" watchObservedRunningTime="2025-12-01 08:41:13.981010705 +0000 UTC m=+154.053298609" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:13.985110 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.007155 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xx949" podStartSLOduration=133.007128787 podStartE2EDuration="2m13.007128787s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:14.006896159 +0000 UTC m=+154.079184063" watchObservedRunningTime="2025-12-01 08:41:14.007128787 +0000 UTC m=+154.079416691" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.007902 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.008241 4689 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.508225591 +0000 UTC m=+154.580513495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.067434 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-24jcf" podStartSLOduration=134.067405035 podStartE2EDuration="2m14.067405035s" podCreationTimestamp="2025-12-01 08:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:14.038307209 +0000 UTC m=+154.110595133" watchObservedRunningTime="2025-12-01 08:41:14.067405035 +0000 UTC m=+154.139692959" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.079495 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-29dmp"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.089864 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bhjqx" podStartSLOduration=133.089840149 podStartE2EDuration="2m13.089840149s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:14.088821107 +0000 UTC m=+154.161109021" watchObservedRunningTime="2025-12-01 08:41:14.089840149 +0000 UTC m=+154.162128043" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.109679 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.109945 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.609926059 +0000 UTC m=+154.682213963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.138322 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qszrn" podStartSLOduration=6.138298452 podStartE2EDuration="6.138298452s" podCreationTimestamp="2025-12-01 08:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:14.124526553 +0000 UTC m=+154.196814457" watchObservedRunningTime="2025-12-01 08:41:14.138298452 +0000 UTC m=+154.210586356" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.212781 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.213784 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.713732123 +0000 UTC m=+154.786020027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.281148 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j5r2f"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.299805 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-chlnk"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.314921 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.315273 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.815254485 +0000 UTC m=+154.887542389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.320842 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ch9jh"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.334455 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.338872 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.370023 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.417559 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.418890 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:14.91874998 +0000 UTC m=+154.991037884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.420872 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-59vc4"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.465442 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg"] Dec 01 08:41:14 crc kubenswrapper[4689]: W1201 08:41:14.470895 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48cda75c_3d0c_44e1_8f98_c191a4e79e1b.slice/crio-2b83085f9335a4670a607b6b3d00e75393c23c0a17e562f8f3e76b98ee034435 WatchSource:0}: Error finding container 2b83085f9335a4670a607b6b3d00e75393c23c0a17e562f8f3e76b98ee034435: Status 404 returned error can't find the container with id 2b83085f9335a4670a607b6b3d00e75393c23c0a17e562f8f3e76b98ee034435 Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.518046 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.518546 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.519163 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.520781 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.020758177 +0000 UTC m=+155.093046221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:14 crc kubenswrapper[4689]: W1201 08:41:14.558445 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59ce6c52_f027_43b4_904f_c402047a39f0.slice/crio-5e5ad2ea4690f4ba95ff3d3f7bdc8f2db24100c80ae09b24810eb50451a7d615 WatchSource:0}: Error finding container 5e5ad2ea4690f4ba95ff3d3f7bdc8f2db24100c80ae09b24810eb50451a7d615: Status 404 returned error can't find the container with id 5e5ad2ea4690f4ba95ff3d3f7bdc8f2db24100c80ae09b24810eb50451a7d615 Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.558667 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:14 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:14 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:14 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.558743 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.561670 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l54ll"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.576673 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.589509 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g9pxf"] Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.624012 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.631088 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.131059737 +0000 UTC m=+155.203347641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.644790 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.675112 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.724838 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.725719 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.225597327 +0000 UTC m=+155.297885231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.753135 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bnznn"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.768176 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.771233 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8rfdp"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.802627 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.803958 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fdcdc"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.828094 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.828468 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.328453421 +0000 UTC m=+155.400741325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.828600 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.833468 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.834843 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wrw72"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.849290 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8pg9k"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.852351 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.875164 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.911774 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.928962 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:14 crc kubenswrapper[4689]: E1201 08:41:14.930431 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.430404126 +0000 UTC m=+155.502692040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.946099 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9nx2j"]
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.960402 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" event={"ID":"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf","Type":"ContainerStarted","Data":"965ef3d07fa31cb6e999ef22e3ef95582517ca08554130db8a587f6f0a359522"}
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.985896 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" event={"ID":"828d985f-2b1a-47df-b653-907e8684d1f5","Type":"ContainerStarted","Data":"2eca679824b89cf9ad2440c1b9b33a6f3a737373c06bac4968fd2f079fe0b5ff"}
Dec 01 08:41:14 crc kubenswrapper[4689]: I1201 08:41:14.985956 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" event={"ID":"828d985f-2b1a-47df-b653-907e8684d1f5","Type":"ContainerStarted","Data":"3e523ebaa7bf944822e41aced2facb806d5bb05e04417228da336c8ca8d5ac32"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.004933 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg"]
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.013672 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" event={"ID":"065b75bb-d7a1-478c-bb62-cec913693a7e","Type":"ContainerStarted","Data":"15198cf0259341f9135506e5932b5709eff8fe33646f23876bb563fad680e5de"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.014979 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vhndn" podStartSLOduration=135.014962508 podStartE2EDuration="2m15.014962508s" podCreationTimestamp="2025-12-01 08:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:15.014249305 +0000 UTC m=+155.086537219" watchObservedRunningTime="2025-12-01 08:41:15.014962508 +0000 UTC m=+155.087250412"
Dec 01 08:41:15 crc kubenswrapper[4689]: W1201 08:41:15.016551 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8122c2_aaf0_4148_849c_ca4502dd0f55.slice/crio-55e58fa576eba472106579380938f50b800bf2c97237f09253ead86b0cc82e12 WatchSource:0}: Error finding container 55e58fa576eba472106579380938f50b800bf2c97237f09253ead86b0cc82e12: Status 404 returned error can't find the container with id 55e58fa576eba472106579380938f50b800bf2c97237f09253ead86b0cc82e12
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.033075 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.033805 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.533775577 +0000 UTC m=+155.606063481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.034415 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" event={"ID":"c062b92b-1709-4892-9b40-b1d2405d5812","Type":"ContainerStarted","Data":"dd9f9d2aaa1d6f2a3221a52f6339c2cd6437cc953f2a74b00bd8beb745c77f65"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.039180 4689 generic.go:334] "Generic (PLEG): container finished" podID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerID="4aef06b827fcf5804c20bc37fb56df76f65c93d73a82dd0bc3d0c005736e1c4f" exitCode=0
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.039576 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" event={"ID":"21eaf97a-bf73-4e70-a9bc-153b17b8a799","Type":"ContainerDied","Data":"4aef06b827fcf5804c20bc37fb56df76f65c93d73a82dd0bc3d0c005736e1c4f"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.039607 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" event={"ID":"21eaf97a-bf73-4e70-a9bc-153b17b8a799","Type":"ContainerStarted","Data":"fd87f277bbd48244a54cd7dea5c1cfd5e3d879f74c9d985076b7b5e5db87fc5b"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.078565 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j5r2f" event={"ID":"710ccb76-093a-484d-a784-737ae81e7c21","Type":"ContainerStarted","Data":"5671ca8829ff117d44511aa04de10cd2bf9fe76b37a026879c8d47ca73a4e996"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.078607 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577"]
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.104332 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" event={"ID":"2fd47e85-de9d-475a-8907-4e805cb1cfc8","Type":"ContainerStarted","Data":"98586172a1630518d8d2fb97cf9c11296c621272754524499e6ac7c4d592ade5"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.111143 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" event={"ID":"c1a4774c-b15d-424e-bb37-d6880da5ad85","Type":"ContainerStarted","Data":"45463abccba7772a66c3b06721acf307bb288b7122958cd995a7c5c31da99c42"}
Dec 01 08:41:15 crc kubenswrapper[4689]: W1201 08:41:15.117843 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eaef062_e274_4f3c_8ce2_3ea23e7106da.slice/crio-9b8d09ef7a30bf3c4e9529cd7b5595a138654f98c28562adb119310c17195378 WatchSource:0}: Error finding container 9b8d09ef7a30bf3c4e9529cd7b5595a138654f98c28562adb119310c17195378: Status 404 returned error can't find the container with id 9b8d09ef7a30bf3c4e9529cd7b5595a138654f98c28562adb119310c17195378
Dec 01 08:41:15 crc kubenswrapper[4689]: W1201 08:41:15.122633 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2546c757_03ab_4ba3_95d0_aa537cd615fb.slice/crio-aa63d9fd1821aada0209b7849a164cdfafa4db6cd29a03c70b47c7e74b77239a WatchSource:0}: Error finding container aa63d9fd1821aada0209b7849a164cdfafa4db6cd29a03c70b47c7e74b77239a: Status 404 returned error can't find the container with id aa63d9fd1821aada0209b7849a164cdfafa4db6cd29a03c70b47c7e74b77239a
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.123276 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" event={"ID":"59ce6c52-f027-43b4-904f-c402047a39f0","Type":"ContainerStarted","Data":"5e5ad2ea4690f4ba95ff3d3f7bdc8f2db24100c80ae09b24810eb50451a7d615"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.137868 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" event={"ID":"f005f766-04a5-4b03-8c50-2fd9ddf967be","Type":"ContainerStarted","Data":"ac7074aed7aa0a83e094ae53828e32fc3a2507f8d8800abede6e02123f8648f8"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.142721 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" event={"ID":"be1070d3-8d5b-4910-aee6-3fee2a360934","Type":"ContainerStarted","Data":"8d1617d9415632ee5cef3857cd977255ded82f87194b14486f0e77266a250239"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.144712 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg"]
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.144928 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.145229 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.645206214 +0000 UTC m=+155.717494118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.160789 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" event={"ID":"dadd1959-680d-4f67-9af9-65d8519398df","Type":"ContainerStarted","Data":"99041859f5970eed2a094b6fc5288504bbfdb1d0f38c75ca79d839a314a64ef6"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.165830 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" event={"ID":"c389f615-2c0f-467b-924e-ad740d3fff07","Type":"ContainerStarted","Data":"c092f6fb891be8c84c33a6f6a28e3011c2f80af73b31b482f4ee27d4b5d19085"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.184276 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" event={"ID":"48cda75c-3d0c-44e1-8f98-c191a4e79e1b","Type":"ContainerStarted","Data":"2b83085f9335a4670a607b6b3d00e75393c23c0a17e562f8f3e76b98ee034435"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.199445 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" podStartSLOduration=134.199407849 podStartE2EDuration="2m14.199407849s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:15.176340485 +0000 UTC m=+155.248628409" watchObservedRunningTime="2025-12-01 08:41:15.199407849 +0000 UTC m=+155.271695753"
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.201343 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5"]
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.225380 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" event={"ID":"1120a89b-2c45-428f-8577-eb6eb712961b","Type":"ContainerStarted","Data":"46762b2e0169f224f5992db7bbbd45f0017bbd3474e0c6116c7b26c805a129f6"}
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.236951 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f"
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.252572 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.252746 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.253393 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.265622 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.765596257 +0000 UTC m=+155.837884161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.302613 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z629s"]
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.360756 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.361031 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.860982433 +0000 UTC m=+155.933270337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.361231 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.362220 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.862194981 +0000 UTC m=+155.934482885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.470334 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.470524 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.970495459 +0000 UTC m=+156.042783363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.470909 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.471259 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:15.971244483 +0000 UTC m=+156.043532387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.535526 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 01 08:41:15 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 01 08:41:15 crc kubenswrapper[4689]: [+]process-running ok
Dec 01 08:41:15 crc kubenswrapper[4689]: healthz check failed
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.535632 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.605769 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.606661 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.106639703 +0000 UTC m=+156.178927627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.707484 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.708009 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.207988849 +0000 UTC m=+156.280276743 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.814147 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.814607 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.314569882 +0000 UTC m=+156.386857786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.815006 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.815458 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.315434319 +0000 UTC m=+156.387722223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.919019 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.919336 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.419292925 +0000 UTC m=+156.491580829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:15 crc kubenswrapper[4689]: I1201 08:41:15.919424 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:15 crc kubenswrapper[4689]: E1201 08:41:15.920839 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.420808414 +0000 UTC m=+156.493096478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.024416 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.024928 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.524906018 +0000 UTC m=+156.597193922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.126812 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.127289 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.627267095 +0000 UTC m=+156.699554999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.229989 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.230530 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.730508972 +0000 UTC m=+156.802796876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.338095 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.338579 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.838563562 +0000 UTC m=+156.910851466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.350289 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" event={"ID":"45f382fb-86c3-493f-a2ab-eb9b51923752","Type":"ContainerStarted","Data":"23c0a22abf973496cb516798bf3d782042bd21fa388fa7fcee1b9b0e10340a8c"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.350382 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" event={"ID":"45f382fb-86c3-493f-a2ab-eb9b51923752","Type":"ContainerStarted","Data":"ae0b964921b91eb8d35ce6e5bfd1c81ca000636cd09df30d2303dc8a4a469157"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.352814 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" event={"ID":"e85d92ae-30aa-4302-b217-43a48dcadd8a","Type":"ContainerStarted","Data":"e1d5b4003e6ac7729d5c114ed97812492f57ce66682981c0795e083d02dd1752"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.354710 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" event={"ID":"59ce6c52-f027-43b4-904f-c402047a39f0","Type":"ContainerStarted","Data":"685b5abdcb5db4531f55b18d3510c14fda3245345effe0f5a3ec66cb6b7c2976"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.357277 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" event={"ID":"be1070d3-8d5b-4910-aee6-3fee2a360934","Type":"ContainerStarted","Data":"43cb72af69f0beb62deeddfe1a7cceed942748ca60e676f91be5e13083a6d95c"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.359452 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" event={"ID":"3d0af3ff-5d7b-41ae-be27-4dea7a282d86","Type":"ContainerStarted","Data":"b33f1ed533a3ca9e7ef89cc41260128661804afcd6181525856ea9d877bf7334"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.360613 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" event={"ID":"48cda75c-3d0c-44e1-8f98-c191a4e79e1b","Type":"ContainerStarted","Data":"e3df41afe592fb0411c93614e4494708902fe411655b6f462676c16e705c3de8"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.362297 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" event={"ID":"5fb20738-492b-4b13-bf8a-5c32aabc0f32","Type":"ContainerStarted","Data":"58416c2f7a7456ce974f1716db49d1c6087a0dd645aad9bc922f1ae2ea44c60a"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.364539 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" event={"ID":"2546c757-03ab-4ba3-95d0-aa537cd615fb","Type":"ContainerStarted","Data":"aa63d9fd1821aada0209b7849a164cdfafa4db6cd29a03c70b47c7e74b77239a"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.383403 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-59vc4" podStartSLOduration=134.383357088 podStartE2EDuration="2m14.383357088s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:16.381425276 +0000 UTC m=+156.453713200" watchObservedRunningTime="2025-12-01 08:41:16.383357088 +0000 UTC m=+156.455644992"
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.386064 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" event={"ID":"c062b92b-1709-4892-9b40-b1d2405d5812","Type":"ContainerStarted","Data":"fb3e3d0edaa3ad6647fa6e90e34a9ccab5711c92f247f3765d3716fbf69713ef"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.390081 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" event={"ID":"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b","Type":"ContainerStarted","Data":"6d4f124ae2456995cb25717ec6e797f340cb98e3ce37478784eecc71b56f85d1"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.417913 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bnznn" event={"ID":"df446b8a-9d6b-41e5-9b7f-2ffa97a1217c","Type":"ContainerStarted","Data":"e108c2e74116d0e8ca95773045a3c8acfbdbb3ece7c4af6830f772d8169c7556"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.434111 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" event={"ID":"c1a4774c-b15d-424e-bb37-d6880da5ad85","Type":"ContainerStarted","Data":"9ab4c162d133a2cb29bea264da6c61551de6387505cd94cabd85eccef7857c3c"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.437309 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z629s" event={"ID":"304e31dc-6fcd-4654-9c3d-ef693f7c71a6","Type":"ContainerStarted","Data":"a8c749b54f27ebadc16309c33a5efeb4ddd3517a2edcd71ec5f26b7965599368"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.439101 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.446913 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:16.946856918 +0000 UTC m=+157.019144822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.454799 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" event={"ID":"6e2d0411-e6d8-49a0-90c5-e1454e71bf44","Type":"ContainerStarted","Data":"b9c43884c08923ba36ebb2f068c2f60f4f9ceee3c4c424e37ea5a64a88d20781"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.456616 4689 generic.go:334] "Generic (PLEG): container finished" podID="c389f615-2c0f-467b-924e-ad740d3fff07" containerID="d6bc9ff09da492c72ad278096f50abcad1f8343c11a6f29922f81278a180d7b3" exitCode=0
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.456671 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" event={"ID":"c389f615-2c0f-467b-924e-ad740d3fff07","Type":"ContainerDied","Data":"d6bc9ff09da492c72ad278096f50abcad1f8343c11a6f29922f81278a180d7b3"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.461301 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" event={"ID":"79369af1-c9d2-4d8e-a675-a5174bc0e4ad","Type":"ContainerStarted","Data":"626ee922b3f83e081fd9becea8486301b8113368c588a488a8d4ea2441014a17"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.462146 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wrw72" event={"ID":"4095fada-3a3f-4938-a63b-07eb736ad683","Type":"ContainerStarted","Data":"bea6a07b927a33742a67be5e2f2fbb589e52a55948c972645110b53afeddca2b"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.477903 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" event={"ID":"2fd47e85-de9d-475a-8907-4e805cb1cfc8","Type":"ContainerStarted","Data":"ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.479128 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll"
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.480249 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l54ll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.480298 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" podUID="2fd47e85-de9d-475a-8907-4e805cb1cfc8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.484601 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" event={"ID":"bd8122c2-aaf0-4148-849c-ca4502dd0f55","Type":"ContainerStarted","Data":"55e58fa576eba472106579380938f50b800bf2c97237f09253ead86b0cc82e12"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.504938 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577" event={"ID":"9eaef062-e274-4f3c-8ce2-3ea23e7106da","Type":"ContainerStarted","Data":"9b8d09ef7a30bf3c4e9529cd7b5595a138654f98c28562adb119310c17195378"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.512547 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" event={"ID":"e7754ebe-fe0e-44a5-b463-1d005035d249","Type":"ContainerStarted","Data":"a1856f39e4b6416547382e7ef006cf155e4251b67523aaef054bddbdd8dbe4a6"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.516711 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" event={"ID":"2499ecbd-1cda-49a9-8c8a-e80d44127f01","Type":"ContainerStarted","Data":"199a1f75591998b418e414ec1728c99b54c6f9810064f9aa141db0f3422aa059"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.517515 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" event={"ID":"cf58a7d2-9013-4cdf-a435-67695f7677a1","Type":"ContainerStarted","Data":"76e355c7b0c9867936e9771935c55fe1a542328f6c9e82e8cfd36cc2bb46ace8"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.518142 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" event={"ID":"b1b970c0-59a2-4782-8664-b17a7d7a8202","Type":"ContainerStarted","Data":"15078446a0a90f236c1eecb0c8c9410afd351be6a61002cf46c21099c03537a4"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.519204 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j5r2f" event={"ID":"710ccb76-093a-484d-a784-737ae81e7c21","Type":"ContainerStarted","Data":"3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.529735 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 01 08:41:16 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 01 08:41:16 crc kubenswrapper[4689]: [+]process-running ok
Dec 01 08:41:16 crc kubenswrapper[4689]: healthz check failed
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.529812 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.549097 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.549511 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.049494036 +0000 UTC m=+157.121781940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.608315 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ffvcj" podStartSLOduration=135.608289238 podStartE2EDuration="2m15.608289238s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:16.413626802 +0000 UTC m=+156.485914716" watchObservedRunningTime="2025-12-01 08:41:16.608289238 +0000 UTC m=+156.680577142"
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.654878 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.657192 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.157174174 +0000 UTC m=+157.229462068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.758990 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.759489 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.25946246 +0000 UTC m=+157.331750364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.818062 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-j5r2f" podStartSLOduration=135.818029514 podStartE2EDuration="2m15.818029514s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:16.741817129 +0000 UTC m=+156.814105043" watchObservedRunningTime="2025-12-01 08:41:16.818029514 +0000 UTC m=+156.890317418"
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.818313 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" podStartSLOduration=134.818308633 podStartE2EDuration="2m14.818308633s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:16.815888946 +0000 UTC m=+156.888176850" watchObservedRunningTime="2025-12-01 08:41:16.818308633 +0000 UTC m=+156.890596537"
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.863114 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.863343 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.363304895 +0000 UTC m=+157.435592799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.863590 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.864021 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.364012898 +0000 UTC m=+157.436300792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.945281 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" event={"ID":"1120a89b-2c45-428f-8577-eb6eb712961b","Type":"ContainerStarted","Data":"5df5785e150c45924684f3bbb1a10cc2564ca6bf446242da0a450055000f1a24"}
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.950931 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.951001 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.973164 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.973805 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.473750242 +0000 UTC m=+157.546038146 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:16 crc kubenswrapper[4689]: I1201 08:41:16.974082 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:16 crc kubenswrapper[4689]: E1201 08:41:16.975967 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.475952461 +0000 UTC m=+157.548240365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.077151 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.077751 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.577728641 +0000 UTC m=+157.650016555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.198451 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.198849 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.698836736 +0000 UTC m=+157.771124630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.323416 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.323850 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.823832106 +0000 UTC m=+157.896120010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.434207 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.434918 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:17.934905031 +0000 UTC m=+158.007192935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.523888 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 01 08:41:17 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 01 08:41:17 crc kubenswrapper[4689]: [+]process-running ok
Dec 01 08:41:17 crc kubenswrapper[4689]: healthz check failed
Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.523964 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.543857 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.544380 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.044342584 +0000 UTC m=+158.116630488 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.645452 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.645880 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.145864476 +0000 UTC m=+158.218152380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.748190 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.749126 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.249106762 +0000 UTC m=+158.321394666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.850425 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.850942 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.350920044 +0000 UTC m=+158.423207948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.952468 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.952964 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.452925791 +0000 UTC m=+158.525213695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:17 crc kubenswrapper[4689]: I1201 08:41:17.954330 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:17 crc kubenswrapper[4689]: E1201 08:41:17.954890 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.454879513 +0000 UTC m=+158.527167417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.072093 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.072333 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.572296981 +0000 UTC m=+158.644584885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.072490 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.072927 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.57290251 +0000 UTC m=+158.645190414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.170616 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" event={"ID":"b1b970c0-59a2-4782-8664-b17a7d7a8202","Type":"ContainerStarted","Data":"cba92047a7a632925f896dab9b77969c1150e1427687ee0c155b4db192ee4e3d"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.173769 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.173962 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.174017 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.174049 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.174730 4689 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.674713801 +0000 UTC m=+158.747001705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.192019 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" event={"ID":"c062b92b-1709-4892-9b40-b1d2405d5812","Type":"ContainerStarted","Data":"4bbd80839acd5a5d0681c7172d58e5104f534ef97365c6d736675be81d0add4d"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.209773 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bnznn" event={"ID":"df446b8a-9d6b-41e5-9b7f-2ffa97a1217c","Type":"ContainerStarted","Data":"56d93e4d1c00cde1c9dc19b3df190628fb096677abdfa6b67a7ac510b7a8e7ee"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.227501 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" event={"ID":"cf58a7d2-9013-4cdf-a435-67695f7677a1","Type":"ContainerStarted","Data":"cb7e3273f09ceea90940515c71c9c80b25e15af903c20a9b2ccc898aa20456d2"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.228524 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" event={"ID":"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf","Type":"ContainerStarted","Data":"4eafb0b8150fe3852e8d5ee67027183193cc5d4db2b8823448b7632537191ed4"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.229486 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" event={"ID":"f005f766-04a5-4b03-8c50-2fd9ddf967be","Type":"ContainerStarted","Data":"c6d5ded92f2063eee3e3023a57da88851a450a8cafe87153a8abc7353c7047b2"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.250481 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" event={"ID":"2499ecbd-1cda-49a9-8c8a-e80d44127f01","Type":"ContainerStarted","Data":"349bffad104dbf82304daede045fee0ed27cab4b8d5bc08224518f5fea037b4a"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.252786 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" event={"ID":"dadd1959-680d-4f67-9af9-65d8519398df","Type":"ContainerStarted","Data":"9d8b37ced9ba9dbb512272fa635ba114b721e8fd1b89a9edb1917e6ea93ab525"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.257819 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.257906 4689 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9nx2j container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.257942 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" podUID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.259417 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podStartSLOduration=136.259405516 podStartE2EDuration="2m16.259405516s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.217776442 +0000 UTC m=+158.290064356" watchObservedRunningTime="2025-12-01 08:41:18.259405516 +0000 UTC m=+158.331693420" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.271313 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" event={"ID":"065b75bb-d7a1-478c-bb62-cec913693a7e","Type":"ContainerStarted","Data":"f9ae1b1ef0009e6fed2d33da83686e76ac8bb6756d1c621e2b7401b305cee88b"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.272599 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" event={"ID":"6e2d0411-e6d8-49a0-90c5-e1454e71bf44","Type":"ContainerStarted","Data":"39bd17099e0c2b7799087f8dfe38827e7cc07f8ea5a560df8de46abac69290ee"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.277486 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.278730 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.778712561 +0000 UTC m=+158.851000465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.291139 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" event={"ID":"e85d92ae-30aa-4302-b217-43a48dcadd8a","Type":"ContainerStarted","Data":"70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73"} Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.291539 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.292647 4689 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8rfdp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.292698 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" podUID="e85d92ae-30aa-4302-b217-43a48dcadd8a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.293437 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l54ll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.293467 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" podUID="2fd47e85-de9d-475a-8907-4e805cb1cfc8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.311119 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-chlnk" podStartSLOduration=137.311090522 podStartE2EDuration="2m17.311090522s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.260799091 +0000 UTC m=+158.333087005" watchObservedRunningTime="2025-12-01 08:41:18.311090522 +0000 UTC m=+158.383378426" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.311648 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bnznn" podStartSLOduration=10.311641239 podStartE2EDuration="10.311641239s" podCreationTimestamp="2025-12-01 08:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.309834122 +0000 UTC 
m=+158.382122026" watchObservedRunningTime="2025-12-01 08:41:18.311641239 +0000 UTC m=+158.383929143" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.386145 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.388972 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.888946151 +0000 UTC m=+158.961234055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.402843 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h47fn" podStartSLOduration=137.402814472 podStartE2EDuration="2m17.402814472s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.340420005 +0000 UTC m=+158.412707909" watchObservedRunningTime="2025-12-01 08:41:18.402814472 +0000 UTC m=+158.475102376" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.403295 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xjxwg" podStartSLOduration=137.403287077 podStartE2EDuration="2m17.403287077s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.402550434 +0000 UTC m=+158.474838338" watchObservedRunningTime="2025-12-01 08:41:18.403287077 +0000 UTC m=+158.475575001" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.481678 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" podStartSLOduration=138.481638171 podStartE2EDuration="2m18.481638171s" podCreationTimestamp="2025-12-01 08:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.465843548 +0000 UTC m=+158.538131452" watchObservedRunningTime="2025-12-01 08:41:18.481638171 +0000 UTC m=+158.553926095" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.491397 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: 
\"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.492489 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:18.992468656 +0000 UTC m=+159.064756560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.520172 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:18 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:18 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:18 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.520257 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.588806 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" podStartSLOduration=137.588779121 podStartE2EDuration="2m17.588779121s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.542330942 +0000 UTC m=+158.614618846" watchObservedRunningTime="2025-12-01 08:41:18.588779121 +0000 UTC m=+158.661067025" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.592620 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" podStartSLOduration=137.592580862 podStartE2EDuration="2m17.592580862s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.587041046 +0000 UTC m=+158.659328950" watchObservedRunningTime="2025-12-01 08:41:18.592580862 +0000 UTC m=+158.664868766" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.602733 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.603412 4689 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.103390246 +0000 UTC m=+159.175678150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.651124 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fdcdc" podStartSLOduration=136.651091885 podStartE2EDuration="2m16.651091885s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.650827466 +0000 UTC m=+158.723115390" watchObservedRunningTime="2025-12-01 08:41:18.651091885 +0000 UTC m=+158.723379789" Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.708442 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.708860 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.208839874 +0000 UTC m=+159.281127778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.809347 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.809618 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.30957184 +0000 UTC m=+159.381859744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:18 crc kubenswrapper[4689]: I1201 08:41:18.809927 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:18 crc kubenswrapper[4689]: E1201 08:41:18.810317 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.310299813 +0000 UTC m=+159.382587717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.017943 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.018221 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.518147699 +0000 UTC m=+159.590435603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.119343 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.120007 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.619978801 +0000 UTC m=+159.692266705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.220524 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.220762 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.720721758 +0000 UTC m=+159.793009662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.220868 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.221325 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.721317317 +0000 UTC m=+159.793605221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.331776 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.332265 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.832235527 +0000 UTC m=+159.904523431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.376643 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pzx5g" event={"ID":"e7754ebe-fe0e-44a5-b463-1d005035d249","Type":"ContainerStarted","Data":"937d1c8b9f88cc1ab43334240aed0dba29cd3adf706c83233ed39a283ee4e1b0"} Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.433746 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.435286 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:19.935260897 +0000 UTC m=+160.007548801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.527341 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bsffw" podStartSLOduration=138.527299377 podStartE2EDuration="2m18.527299377s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:18.693182095 +0000 UTC m=+158.765469999" watchObservedRunningTime="2025-12-01 08:41:19.527299377 +0000 UTC m=+159.599587281" Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.536841 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.538050 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:19 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:19 crc kubenswrapper[4689]: [+]process-running ok Dec 01 
08:41:19 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.538302 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.538842 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.038814513 +0000 UTC m=+160.111102437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.553926 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.557081 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.557135 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.566273 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" event={"ID":"2546c757-03ab-4ba3-95d0-aa537cd615fb","Type":"ContainerStarted","Data":"50e9ba744b575b62eb83f7027a5b84db034444b88237a9b0f5b58348f7d38b79"} Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.568017 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.571684 4689 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bjlxg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.571742 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" podUID="2546c757-03ab-4ba3-95d0-aa537cd615fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.592914 4689 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" podStartSLOduration=137.592888705 podStartE2EDuration="2m17.592888705s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:19.592491222 +0000 UTC m=+159.664779136" watchObservedRunningTime="2025-12-01 08:41:19.592888705 +0000 UTC m=+159.665176609"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.594909 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" podStartSLOduration=138.594900479 podStartE2EDuration="2m18.594900479s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:19.525561181 +0000 UTC m=+159.597849085" watchObservedRunningTime="2025-12-01 08:41:19.594900479 +0000 UTC m=+159.667188383"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.598061 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.608169 4689 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-6z2v4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.608274 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" podUID="bd8122c2-aaf0-4148-849c-ca4502dd0f55" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.610806 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" event={"ID":"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b","Type":"ContainerStarted","Data":"1986c8ce300cf1cdb7c3455a45a9b0bf56a6767193d9c1fb845bbe4e7a5cbfb9"}
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.612088 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.613308 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.613358 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.642641 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wrw72" event={"ID":"4095fada-3a3f-4938-a63b-07eb736ad683","Type":"ContainerStarted","Data":"8ab0bf9d39486f9671efd64fc06fdd0b624c596b038d4a68b0f90d2cb073ffa7"}
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.644144 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wrw72"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.645064 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.646073 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.146041316 +0000 UTC m=+160.218329220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.790418 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.791738 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.291686373 +0000 UTC m=+160.363974277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.840519 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.855105 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" event={"ID":"5fb20738-492b-4b13-bf8a-5c32aabc0f32","Type":"ContainerStarted","Data":"1cfced87066f601fd4e4bbc26e411cfd82fa6de9dcd5fee9ce9c0459836affaa"}
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.856728 4689 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9nx2j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.856810 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" podUID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.874529 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" podStartSLOduration=137.874492259 podStartE2EDuration="2m17.874492259s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:19.805339917 +0000 UTC m=+159.877627841" watchObservedRunningTime="2025-12-01 08:41:19.874492259 +0000 UTC m=+159.946780163"
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.904819 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:19 crc kubenswrapper[4689]: E1201 08:41:19.908824 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.408800461 +0000 UTC m=+160.481088365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:19 crc kubenswrapper[4689]: I1201 08:41:19.995748 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" podStartSLOduration=137.995715448 podStartE2EDuration="2m17.995715448s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:19.881432009 +0000 UTC m=+159.953719913" watchObservedRunningTime="2025-12-01 08:41:19.995715448 +0000 UTC m=+160.068003362"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:19.999847 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-l54ll container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:19.999938 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" podUID="2fd47e85-de9d-475a-8907-4e805cb1cfc8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.000121 4689 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8rfdp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body=
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.000201 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" podUID="e85d92ae-30aa-4302-b217-43a48dcadd8a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.006243 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.006785 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.506764489 +0000 UTC m=+160.579052393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.012289 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.125400 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.155205 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.655181484 +0000 UTC m=+160.727469398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.204501 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-z629s" podStartSLOduration=139.204445982 podStartE2EDuration="2m19.204445982s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:20.008458344 +0000 UTC m=+160.080746268" watchObservedRunningTime="2025-12-01 08:41:20.204445982 +0000 UTC m=+160.276733886"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.227744 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.228449 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.728211188 +0000 UTC m=+160.800499092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.228901 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.229494 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.729468499 +0000 UTC m=+160.801756403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.242852 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" podStartSLOduration=139.242829893 podStartE2EDuration="2m19.242829893s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:20.242380589 +0000 UTC m=+160.314668493" watchObservedRunningTime="2025-12-01 08:41:20.242829893 +0000 UTC m=+160.315117797"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.244154 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" podStartSLOduration=139.244146026 podStartE2EDuration="2m19.244146026s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:20.189861028 +0000 UTC m=+160.262148952" watchObservedRunningTime="2025-12-01 08:41:20.244146026 +0000 UTC m=+160.316433930"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.320507 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" podStartSLOduration=138.320475435 podStartE2EDuration="2m18.320475435s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:20.318238154 +0000 UTC m=+160.390526048" watchObservedRunningTime="2025-12-01 08:41:20.320475435 +0000 UTC m=+160.392763329"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.329906 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.330239 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.830212416 +0000 UTC m=+160.902500320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.348298 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.348405 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.349621 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.349736 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.398066 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wrw72" podStartSLOduration=12.397989843 podStartE2EDuration="12.397989843s" podCreationTimestamp="2025-12-01 08:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:20.396545297 +0000 UTC m=+160.468833211" watchObservedRunningTime="2025-12-01 08:41:20.397989843 +0000 UTC m=+160.470277747"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.433000 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.433552 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:20.933534225 +0000 UTC m=+161.005822129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.467640 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podStartSLOduration=138.4676024 podStartE2EDuration="2m18.4676024s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:20.461808595 +0000 UTC m=+160.534096499" watchObservedRunningTime="2025-12-01 08:41:20.4676024 +0000 UTC m=+160.539890304"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.491959 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-j5r2f"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.492675 4689 patch_prober.go:28] interesting pod/console-f9d7485db-j5r2f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.492750 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j5r2f" podUID="710ccb76-093a-484d-a784-737ae81e7c21" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.493120 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-j5r2f"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.511516 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hb577"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.521689 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 01 08:41:20 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 01 08:41:20 crc kubenswrapper[4689]: [+]process-running ok
Dec 01 08:41:20 crc kubenswrapper[4689]: healthz check failed
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.521798 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.533825 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.533938 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.03391833 +0000 UTC m=+161.106206244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.534620 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.535068 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.035060617 +0000 UTC m=+161.107348521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.636109 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.636266 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.136237307 +0000 UTC m=+161.208525221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.636888 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.637468 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.137440775 +0000 UTC m=+161.209728679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.709770 4689 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9nx2j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.709913 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" podUID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.727192 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.727282 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.727398 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.727496 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.730923 4689 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8rfdp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body=
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.730964 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" podUID="e85d92ae-30aa-4302-b217-43a48dcadd8a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.740179 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.740642 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.240616259 +0000 UTC m=+161.312904163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.790336 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" podStartSLOduration=138.790307222 podStartE2EDuration="2m18.790307222s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:20.590938196 +0000 UTC m=+160.663226100" watchObservedRunningTime="2025-12-01 08:41:20.790307222 +0000 UTC m=+160.862595126"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.846350 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.846955 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.346935154 +0000 UTC m=+161.419223058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.848286 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podStartSLOduration=139.848255846 podStartE2EDuration="2m19.848255846s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:20.789132294 +0000 UTC m=+160.861420198" watchObservedRunningTime="2025-12-01 08:41:20.848255846 +0000 UTC m=+160.920543750"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.925834 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll"
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.948189 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:20 crc kubenswrapper[4689]: E1201 08:41:20.949183 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.449159278 +0000 UTC m=+161.521447192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.999699 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577" event={"ID":"9eaef062-e274-4f3c-8ce2-3ea23e7106da","Type":"ContainerStarted","Data":"6c9a6c11b13f8a75d29aaa5d8db77523e50d6e9232c153c4d81ad7bb3d0d8bcd"}
Dec 01 08:41:20 crc kubenswrapper[4689]: I1201 08:41:20.999782 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577" event={"ID":"9eaef062-e274-4f3c-8ce2-3ea23e7106da","Type":"ContainerStarted","Data":"cd7d711febb7bf01dd4c5e807ebe5d505fed306739ebb9beadac20916effa4ae"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.003905 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" event={"ID":"c389f615-2c0f-467b-924e-ad740d3fff07","Type":"ContainerStarted","Data":"27ed95177f90ad8e31a02737a7c8b3dd4f9ffc96a3284b2c7ad72d5728130b48"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.003960 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" event={"ID":"c389f615-2c0f-467b-924e-ad740d3fff07","Type":"ContainerStarted","Data":"347711c67788c5bd6cd22bd959bbb9cf9b047790440d41460a50c96195bf7f2e"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.006227 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" event={"ID":"79369af1-c9d2-4d8e-a675-a5174bc0e4ad","Type":"ContainerStarted","Data":"e7119411225504c561ac6d93bd0e9ed289f2ac51fd34aa6400b28fa26668878e"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.007775 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vf9h5" event={"ID":"3d0af3ff-5d7b-41ae-be27-4dea7a282d86","Type":"ContainerStarted","Data":"2e8e4f3d1aef23117d70916bab5b518cd21aa2508b2ef121f03d9377ca88cf3b"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.010132 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z629s" event={"ID":"304e31dc-6fcd-4654-9c3d-ef693f7c71a6","Type":"ContainerStarted","Data":"c099adf2e4ed6ab3907d7b9cee98d370f405b1fc52d0b864e548ec3172dd2661"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.011578 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.011657 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.014904 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" event={"ID":"21eaf97a-bf73-4e70-a9bc-153b17b8a799","Type":"ContainerStarted","Data":"a85e9de4547f742cf05dd249b25308b5a4189c6cb2d4fc75a6012f15e9f9e0ed"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.018002 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4nmmg" event={"ID":"1120a89b-2c45-428f-8577-eb6eb712961b","Type":"ContainerStarted","Data":"f508d072696180dd2e7fc7bc5828f75197dde9242f3b7d6c5231e54ea34fe2d0"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.032321 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9pxf" event={"ID":"065b75bb-d7a1-478c-bb62-cec913693a7e","Type":"ContainerStarted","Data":"be5b23a310f1c5066e9bdb6d7ed878e4f3a7bfc730f55076277f35812becc128"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.046160 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6t2n" event={"ID":"45f382fb-86c3-493f-a2ab-eb9b51923752","Type":"ContainerStarted","Data":"c7a8e7bee27579db78764c5097af497f543969c74e2681095f581edc61ee5033"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.050668 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.052997 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.552980613 +0000 UTC m=+161.625268507 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.085705 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sc9lr" event={"ID":"cf58a7d2-9013-4cdf-a435-67695f7677a1","Type":"ContainerStarted","Data":"db923dd31a0b889771bcc0946379b36fca999ed04cb149c4ce0a280c4c2df39a"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.085793 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.085808 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" event={"ID":"c1a4774c-b15d-424e-bb37-d6880da5ad85","Type":"ContainerStarted","Data":"e38846c972f75ff37f5b226440203df9c5732e186891f7f0dcc83aeb029752f5"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.088894 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wrw72" event={"ID":"4095fada-3a3f-4938-a63b-07eb736ad683","Type":"ContainerStarted","Data":"286208dd699a42ea4e15606edc3f277f9bd05075c83bc6f2c6adc710cbfb92e1"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.098647 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b7hxr" event={"ID":"a2f6cac4-8eb9-4d62-8ef2-3ceb354076bf","Type":"ContainerStarted","Data":"9ed1a1c15a3b80f03b34180b2572c1383e9538d7a3dd74436d4718cb2e7f9529"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.102570 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" event={"ID":"bd8122c2-aaf0-4148-849c-ca4502dd0f55","Type":"ContainerStarted","Data":"7298609177a1a785d5c63f463609e12770765cb02525245e890c6e41230a272e"}
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.104458 4689 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bjlxg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body=
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.104620 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" podUID="2546c757-03ab-4ba3-95d0-aa537cd615fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.104691 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.104827 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.104981 4689 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9nx2j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.105085 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" podUID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.170101 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.170675 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.670639389 +0000 UTC m=+161.742927293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.171233 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.173400 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.673391676 +0000 UTC m=+161.745679580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.179938 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" podStartSLOduration=141.179916844 podStartE2EDuration="2m21.179916844s" podCreationTimestamp="2025-12-01 08:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:21.177967761 +0000 UTC m=+161.250255665" watchObservedRunningTime="2025-12-01 08:41:21.179916844 +0000 UTC m=+161.252204748"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.249118 4689 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bjlxg container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body=
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.249488 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" podUID="2546c757-03ab-4ba3-95d0-aa537cd615fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.249525 4689 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bjlxg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body=
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.249721 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" podUID="2546c757-03ab-4ba3-95d0-aa537cd615fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.249248 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.249872 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.249265 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.250036 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.272715 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.273250 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.773223584 +0000 UTC m=+161.845511488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.374660 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pr577" podStartSLOduration=140.374641763 podStartE2EDuration="2m20.374641763s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:21.374553389 +0000 UTC m=+161.446841283" watchObservedRunningTime="2025-12-01 08:41:21.374641763 +0000 UTC m=+161.446929667"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.376812 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.377299 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.877278846 +0000 UTC m=+161.949566750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.481047 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.481421 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.981279397 +0000 UTC m=+162.053567301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.482334 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.484851 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:21.98482945 +0000 UTC m=+162.057117354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.531701 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 01 08:41:21 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld
Dec 01 08:41:21 crc kubenswrapper[4689]: [+]process-running ok
Dec 01 08:41:21 crc kubenswrapper[4689]: healthz check failed
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.532216 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.577977 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" podStartSLOduration=139.577939664 podStartE2EDuration="2m19.577939664s" podCreationTimestamp="2025-12-01 08:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:21.576613282 +0000 UTC m=+161.648901176" watchObservedRunningTime="2025-12-01 08:41:21.577939664 +0000 UTC m=+161.650227558"
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.586157 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.586513 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:22.086495706 +0000 UTC m=+162.158783600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.688232 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.688903 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:22.188876695 +0000 UTC m=+162.261164599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:21 crc kubenswrapper[4689]: I1201 08:41:21.833994 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:21 crc kubenswrapper[4689]: E1201 08:41:21.834495 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:22.33445801 +0000 UTC m=+162.406745914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.002552 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl"
Dec 01 08:41:22 crc kubenswrapper[4689]: E1201 08:41:22.003005 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:22.502992364 +0000 UTC m=+162.575280268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.086469 4689 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-6z2v4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.086561 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" podUID="bd8122c2-aaf0-4148-849c-ca4502dd0f55" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.103303 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 01 08:41:22 crc kubenswrapper[4689]: E1201 08:41:22.103668 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:22.603651669 +0000 UTC m=+162.675939563 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.115763 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" event={"ID":"79369af1-c9d2-4d8e-a675-a5174bc0e4ad","Type":"ContainerStarted","Data":"94dd5f04d5f65be24b9aa120dd0617d90c34357089282c804ec56fa4f816d1df"} Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.120404 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.120454 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.120774 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.120796 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.225202 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:22 crc kubenswrapper[4689]: E1201 08:41:22.230157 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:22.730140845 +0000 UTC m=+162.802428749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.345110 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:22 crc kubenswrapper[4689]: E1201 08:41:22.345734 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:22.845715004 +0000 UTC m=+162.918002908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.448172 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:22 crc kubenswrapper[4689]: E1201 08:41:22.448794 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:22.948780615 +0000 UTC m=+163.021068519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.475130 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.554958 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:22 crc kubenswrapper[4689]: E1201 08:41:22.555649 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:23.055627706 +0000 UTC m=+163.127915610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.586582 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:22 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:22 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:22 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.586736 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.718091 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:22 crc kubenswrapper[4689]: E1201 08:41:22.719061 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-01 08:41:23.219019477 +0000 UTC m=+163.291307381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.819081 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:22 crc kubenswrapper[4689]: E1201 08:41:22.819492 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:23.319449534 +0000 UTC m=+163.391737438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:22 crc kubenswrapper[4689]: I1201 08:41:22.921469 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:22 crc kubenswrapper[4689]: E1201 08:41:22.922021 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:23.422001359 +0000 UTC m=+163.494289263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.023427 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.024035 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:23.524004365 +0000 UTC m=+163.596292269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.114853 4689 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bjlxg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.114951 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" podUID="2546c757-03ab-4ba3-95d0-aa537cd615fb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.119461 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.119698 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.125654 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.126148 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:23.626131727 +0000 UTC m=+163.698419631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.227014 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.229070 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:23.729026091 +0000 UTC m=+163.801313995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.331068 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.331588 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:23.831567866 +0000 UTC m=+163.903855770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.433439 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:23.933418898 +0000 UTC m=+164.005706792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.433300 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.433951 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.434430 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:23.93442175 +0000 UTC m=+164.006709654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.448797 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.449175 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.514452 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:23 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:23 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:23 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.514567 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.536284 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.536536 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.036495629 +0000 UTC m=+164.108783523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.537230 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.537887 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.037871893 +0000 UTC m=+164.110159797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.638529 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.638961 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.138945431 +0000 UTC m=+164.211233335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.742594 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.743261 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.24323229 +0000 UTC m=+164.315520194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.844599 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.845241 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.345224796 +0000 UTC m=+164.417512700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:23 crc kubenswrapper[4689]: I1201 08:41:23.946578 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:23 crc kubenswrapper[4689]: E1201 08:41:23.946930 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.446916724 +0000 UTC m=+164.519204628 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.046302 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.047969 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.048122 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.548088595 +0000 UTC m=+164.620376499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.048588 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.049507 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.549442267 +0000 UTC m=+164.621730361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.129245 4689 generic.go:334] "Generic (PLEG): container finished" podID="be1070d3-8d5b-4910-aee6-3fee2a360934" containerID="43cb72af69f0beb62deeddfe1a7cceed942748ca60e676f91be5e13083a6d95c" exitCode=0 Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.129629 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" event={"ID":"be1070d3-8d5b-4910-aee6-3fee2a360934","Type":"ContainerDied","Data":"43cb72af69f0beb62deeddfe1a7cceed942748ca60e676f91be5e13083a6d95c"} Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.132259 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" event={"ID":"79369af1-c9d2-4d8e-a675-a5174bc0e4ad","Type":"ContainerStarted","Data":"4ac8110def21c55047ac193f8b99263ad6eeb40938b85da02429705d1ce27908"} Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.150112 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.151511 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.651494096 +0000 UTC m=+164.723782000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.325590 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.326052 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:24.826036882 +0000 UTC m=+164.898324786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.517575 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.517704 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.017686162 +0000 UTC m=+165.089974066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.517907 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.518313 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.018305863 +0000 UTC m=+165.090593767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.529971 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:24 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:24 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:24 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.530052 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.618910 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.619192 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.119154883 +0000 UTC m=+165.191442787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.619242 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.619745 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.119727721 +0000 UTC m=+165.192015635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.720202 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.720610 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.220594693 +0000 UTC m=+165.292882597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.822189 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.822759 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.322742783 +0000 UTC m=+165.395030687 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:24 crc kubenswrapper[4689]: I1201 08:41:24.948193 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:24 crc kubenswrapper[4689]: E1201 08:41:24.948653 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.44862219 +0000 UTC m=+165.520910094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.049699 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.050388 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.550346479 +0000 UTC m=+165.622634383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.150535 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.151409 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.651360165 +0000 UTC m=+165.723648069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.151710 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.152135 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.652127709 +0000 UTC m=+165.724415613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.175697 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-twqmb"] Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.177002 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.209322 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twqmb"] Dec 01 08:41:25 crc kubenswrapper[4689]: W1201 08:41:25.211571 4689 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.211644 4689 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.214792 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4kvdm"] Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.215960 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.233165 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.265111 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.265492 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.765473877 +0000 UTC m=+165.837761781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.269108 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kvdm"] Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.347345 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6gkfv"] Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.352483 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.359631 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.365472 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.375549 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-utilities\") pod \"certified-operators-4kvdm\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") " pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.375614 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-utilities\") pod \"community-operators-twqmb\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") " pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.375742 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.375778 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-catalog-content\") pod \"community-operators-twqmb\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") " pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.375802 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6z2f\" (UniqueName: \"kubernetes.io/projected/a49ba834-1d80-4003-bf95-6dfd68b25a49-kube-api-access-j6z2f\") pod \"community-operators-twqmb\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") " pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.375826 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrb74\" (UniqueName: \"kubernetes.io/projected/e5e4c105-766f-4c1a-befe-a059da17406f-kube-api-access-nrb74\") pod \"certified-operators-4kvdm\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") " pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.375860 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-catalog-content\") pod \"certified-operators-4kvdm\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") " pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.382730 4689 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.882698088 +0000 UTC m=+165.954985992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.411599 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6gkfv"] Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.477315 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.478115 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-catalog-content\") pod \"community-operators-6gkfv\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.478204 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-utilities\") pod \"certified-operators-4kvdm\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") " pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.478280 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-utilities\") pod \"community-operators-twqmb\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") " pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.478399 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-utilities\") pod \"community-operators-6gkfv\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.478518 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-catalog-content\") pod \"community-operators-twqmb\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") " pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.478633 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6z2f\" (UniqueName: \"kubernetes.io/projected/a49ba834-1d80-4003-bf95-6dfd68b25a49-kube-api-access-j6z2f\") pod 
\"community-operators-twqmb\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") " pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.478718 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrb74\" (UniqueName: \"kubernetes.io/projected/e5e4c105-766f-4c1a-befe-a059da17406f-kube-api-access-nrb74\") pod \"certified-operators-4kvdm\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") " pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.478828 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2n49\" (UniqueName: \"kubernetes.io/projected/2790f0e0-bca7-4070-8d79-72ae564043ef-kube-api-access-x2n49\") pod \"community-operators-6gkfv\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.478996 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-catalog-content\") pod \"certified-operators-4kvdm\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") " pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.479654 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-catalog-content\") pod \"certified-operators-4kvdm\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") " pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.479678 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:25.979643425 +0000 UTC m=+166.051931329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.480252 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-utilities\") pod \"certified-operators-4kvdm\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") " pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.480498 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-catalog-content\") pod \"community-operators-twqmb\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") " pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.480660 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-utilities\") pod \"community-operators-twqmb\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") " pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.530277 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6z2f\" (UniqueName: \"kubernetes.io/projected/a49ba834-1d80-4003-bf95-6dfd68b25a49-kube-api-access-j6z2f\") pod \"community-operators-twqmb\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") " pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.530653 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:25 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:25 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:25 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.530734 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.549797 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrb74\" (UniqueName: \"kubernetes.io/projected/e5e4c105-766f-4c1a-befe-a059da17406f-kube-api-access-nrb74\") pod \"certified-operators-4kvdm\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") " pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.579798 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2n49\" (UniqueName: \"kubernetes.io/projected/2790f0e0-bca7-4070-8d79-72ae564043ef-kube-api-access-x2n49\") pod \"community-operators-6gkfv\" (UID: 
\"2790f0e0-bca7-4070-8d79-72ae564043ef\") " pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.579879 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-catalog-content\") pod \"community-operators-6gkfv\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.579913 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-utilities\") pod \"community-operators-6gkfv\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.579934 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.580293 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:26.080280628 +0000 UTC m=+166.152568532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.580538 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-catalog-content\") pod \"community-operators-6gkfv\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.580610 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-utilities\") pod \"community-operators-6gkfv\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.580804 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hwvv4"] Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.581902 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.630498 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwvv4"] Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.659153 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2n49\" (UniqueName: \"kubernetes.io/projected/2790f0e0-bca7-4070-8d79-72ae564043ef-kube-api-access-x2n49\") pod \"community-operators-6gkfv\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.680792 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.681283 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:26.181262033 +0000 UTC m=+166.253549937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.823204 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-catalog-content\") pod \"certified-operators-hwvv4\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") " pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.823278 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.823322 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-utilities\") pod \"certified-operators-hwvv4\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") " pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.823347 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc7zj\" (UniqueName: \"kubernetes.io/projected/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-kube-api-access-vc7zj\") pod \"certified-operators-hwvv4\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") " pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:25 crc 
kubenswrapper[4689]: I1201 08:41:25.823400 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.823802 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:26.323777029 +0000 UTC m=+166.396064923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.831775 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kvdm" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.841903 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d6a08d0-a948-4c69-b3f0-f5e084adb453-metrics-certs\") pod \"network-metrics-daemon-jtwvs\" (UID: \"5d6a08d0-a948-4c69-b3f0-f5e084adb453\") " pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.880768 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jtwvs" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.928683 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.928952 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-utilities\") pod \"certified-operators-hwvv4\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") " pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.928984 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc7zj\" (UniqueName: \"kubernetes.io/projected/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-kube-api-access-vc7zj\") pod \"certified-operators-hwvv4\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") " pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.929065 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-catalog-content\") pod \"certified-operators-hwvv4\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") " pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.929619 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-catalog-content\") pod \"certified-operators-hwvv4\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") " pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:25 crc kubenswrapper[4689]: E1201 08:41:25.929701 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:26.42968238 +0000 UTC m=+166.501970284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.929923 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-utilities\") pod \"certified-operators-hwvv4\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") " pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:25 crc kubenswrapper[4689]: I1201 08:41:25.986145 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc7zj\" (UniqueName: \"kubernetes.io/projected/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-kube-api-access-vc7zj\") pod \"certified-operators-hwvv4\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") " pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.031488 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.031884 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:26.531871323 +0000 UTC m=+166.604159227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.132480 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.132805 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:26.632786226 +0000 UTC m=+166.705074130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.132974 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.133880 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.143756 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.194881 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.195098 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.204237 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwvv4" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.211900 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" event={"ID":"79369af1-c9d2-4d8e-a675-a5174bc0e4ad","Type":"ContainerStarted","Data":"a73f6d02fcf7dd2b15338e42212aba4f059e43e61731804f5b65d8ce2e3c8e1c"} Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.234521 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.234652 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.234706 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.237268 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:26.737252611 +0000 UTC m=+166.809540515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.270324 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.271240 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.289319 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.290305 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.293268 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.336250 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.336499 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.336617 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.337280 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:26.837258815 +0000 UTC m=+166.909546719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.337319 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.400487 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.404710 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" podStartSLOduration=18.404681151 podStartE2EDuration="18.404681151s" podCreationTimestamp="2025-12-01 08:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:26.401713096 +0000 UTC m=+166.474001020" watchObservedRunningTime="2025-12-01 08:41:26.404681151 +0000 UTC m=+166.476969055" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.442613 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.443019 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/507dc54c-e4ea-4b41-a390-fcc3123a7859-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"507dc54c-e4ea-4b41-a390-fcc3123a7859\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.443196 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/507dc54c-e4ea-4b41-a390-fcc3123a7859-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"507dc54c-e4ea-4b41-a390-fcc3123a7859\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.443715 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:26.943701573 +0000 UTC m=+167.015989477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.519342 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.528090 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:26 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:26 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:26 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.528162 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.560054 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.560279 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/507dc54c-e4ea-4b41-a390-fcc3123a7859-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"507dc54c-e4ea-4b41-a390-fcc3123a7859\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.560360 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/507dc54c-e4ea-4b41-a390-fcc3123a7859-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"507dc54c-e4ea-4b41-a390-fcc3123a7859\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.560468 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/507dc54c-e4ea-4b41-a390-fcc3123a7859-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"507dc54c-e4ea-4b41-a390-fcc3123a7859\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.560561 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.060542092 +0000 UTC m=+167.132829996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.572357 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.573702 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.576633 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twqmb" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.620321 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.639432 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/507dc54c-e4ea-4b41-a390-fcc3123a7859-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"507dc54c-e4ea-4b41-a390-fcc3123a7859\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.664989 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.665443 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.165429061 +0000 UTC m=+167.237716965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.688433 4689 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.700358 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.766380 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.766686 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1070d3-8d5b-4910-aee6-3fee2a360934-config-volume\") pod \"be1070d3-8d5b-4910-aee6-3fee2a360934\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.766849 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlctf\" (UniqueName: \"kubernetes.io/projected/be1070d3-8d5b-4910-aee6-3fee2a360934-kube-api-access-zlctf\") pod \"be1070d3-8d5b-4910-aee6-3fee2a360934\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.766994 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be1070d3-8d5b-4910-aee6-3fee2a360934-secret-volume\") pod \"be1070d3-8d5b-4910-aee6-3fee2a360934\" (UID: \"be1070d3-8d5b-4910-aee6-3fee2a360934\") " Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.767773 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.267723938 +0000 UTC m=+167.340011852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.768002 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.768730 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.268720489 +0000 UTC m=+167.341008403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.769435 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be1070d3-8d5b-4910-aee6-3fee2a360934-config-volume" (OuterVolumeSpecName: "config-volume") pod "be1070d3-8d5b-4910-aee6-3fee2a360934" (UID: "be1070d3-8d5b-4910-aee6-3fee2a360934"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.805272 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1070d3-8d5b-4910-aee6-3fee2a360934-kube-api-access-zlctf" (OuterVolumeSpecName: "kube-api-access-zlctf") pod "be1070d3-8d5b-4910-aee6-3fee2a360934" (UID: "be1070d3-8d5b-4910-aee6-3fee2a360934"). InnerVolumeSpecName "kube-api-access-zlctf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.831136 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1070d3-8d5b-4910-aee6-3fee2a360934-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "be1070d3-8d5b-4910-aee6-3fee2a360934" (UID: "be1070d3-8d5b-4910-aee6-3fee2a360934"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.870149 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.870627 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be1070d3-8d5b-4910-aee6-3fee2a360934-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.870738 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be1070d3-8d5b-4910-aee6-3fee2a360934-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.870821 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlctf\" (UniqueName: \"kubernetes.io/projected/be1070d3-8d5b-4910-aee6-3fee2a360934-kube-api-access-zlctf\") on node \"crc\" DevicePath \"\"" Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.870975 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.370952873 +0000 UTC m=+167.443240777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:26 crc kubenswrapper[4689]: I1201 08:41:26.974327 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:26 crc kubenswrapper[4689]: E1201 08:41:26.975122 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.475107069 +0000 UTC m=+167.547394973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.093616 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:27 crc kubenswrapper[4689]: E1201 08:41:27.095086 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.595049646 +0000 UTC m=+167.667337550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.207519 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:27 crc kubenswrapper[4689]: E1201 08:41:27.208232 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.708218359 +0000 UTC m=+167.780506263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.255876 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.256114 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp" event={"ID":"be1070d3-8d5b-4910-aee6-3fee2a360934","Type":"ContainerDied","Data":"8d1617d9415632ee5cef3857cd977255ded82f87194b14486f0e77266a250239"} Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.256208 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d1617d9415632ee5cef3857cd977255ded82f87194b14486f0e77266a250239" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.308722 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:27 crc kubenswrapper[4689]: E1201 08:41:27.310090 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.810072931 +0000 UTC m=+167.882360835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.410044 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:27 crc kubenswrapper[4689]: E1201 08:41:27.410590 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:27.91056988 +0000 UTC m=+167.982857784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.431694 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c4zck"] Dec 01 08:41:27 crc kubenswrapper[4689]: E1201 08:41:27.431952 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1070d3-8d5b-4910-aee6-3fee2a360934" containerName="collect-profiles" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.431979 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1070d3-8d5b-4910-aee6-3fee2a360934" containerName="collect-profiles" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.432102 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1070d3-8d5b-4910-aee6-3fee2a360934" containerName="collect-profiles" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.432975 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.437751 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.456512 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4zck"] Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.522394 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:27 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:27 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:27 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.522493 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.526722 4689 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-01T08:41:26.688487635Z","Handler":null,"Name":""} Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.533747 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:27 crc kubenswrapper[4689]: E1201 08:41:27.537012 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 08:41:28.036977624 +0000 UTC m=+168.109265528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.537839 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-utilities\") pod \"redhat-marketplace-c4zck\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") " pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.537891 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-catalog-content\") pod \"redhat-marketplace-c4zck\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") " pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.537930 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnvrm\" (UniqueName: \"kubernetes.io/projected/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-kube-api-access-fnvrm\") pod \"redhat-marketplace-c4zck\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") " pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.537982 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:27 crc kubenswrapper[4689]: E1201 08:41:27.538414 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 08:41:28.0383868 +0000 UTC m=+168.110674694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t8fdl" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.548054 4689 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.548115 4689 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.573096 4689 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ch9jh container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]log ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]etcd ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/generic-apiserver-start-informers ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/max-in-flight-filter ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 01 08:41:27 crc kubenswrapper[4689]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 01 08:41:27 crc kubenswrapper[4689]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/project.openshift.io-projectcache ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-startinformers ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 01 08:41:27 crc kubenswrapper[4689]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 01 08:41:27 crc kubenswrapper[4689]: livez check failed Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.573183 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" podUID="c389f615-2c0f-467b-924e-ad740d3fff07" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.623080 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4kvdm"] Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.639066 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 08:41:27 crc kubenswrapper[4689]: 
I1201 08:41:27.639262 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-utilities\") pod \"redhat-marketplace-c4zck\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") " pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.639299 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-catalog-content\") pod \"redhat-marketplace-c4zck\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") " pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.639330 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnvrm\" (UniqueName: \"kubernetes.io/projected/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-kube-api-access-fnvrm\") pod \"redhat-marketplace-c4zck\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") " pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.640197 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-utilities\") pod \"redhat-marketplace-c4zck\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") " pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.640443 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-catalog-content\") pod \"redhat-marketplace-c4zck\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") " pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.680624 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jtwvs"] Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.685196 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnvrm\" (UniqueName: \"kubernetes.io/projected/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-kube-api-access-fnvrm\") pod \"redhat-marketplace-c4zck\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") " pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.758724 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.788760 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jk49d"] Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.790642 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.806587 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 01 08:41:27 crc kubenswrapper[4689]: W1201 08:41:27.813135 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d6a08d0_a948_4c69_b3f0_f5e084adb453.slice/crio-9b9f14167cb052f15c80f3b76e7c20ababb7697ca7d10b4a4addc37cb7c485db WatchSource:0}: Error finding container 9b9f14167cb052f15c80f3b76e7c20ababb7697ca7d10b4a4addc37cb7c485db: Status 404 returned error can't find the container with id 9b9f14167cb052f15c80f3b76e7c20ababb7697ca7d10b4a4addc37cb7c485db Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.821230 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jk49d"] Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.858655 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clq27\" (UniqueName: \"kubernetes.io/projected/2a3d70f2-3da1-4712-bb46-200a641c7648-kube-api-access-clq27\") pod \"redhat-marketplace-jk49d\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") " pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.858863 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-catalog-content\") pod \"redhat-marketplace-jk49d\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") " pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.858999 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.859148 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-utilities\") pod \"redhat-marketplace-jk49d\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") " pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.901597 4689 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.901662 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.960771 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-utilities\") pod \"redhat-marketplace-jk49d\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") " pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.960862 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clq27\" (UniqueName: \"kubernetes.io/projected/2a3d70f2-3da1-4712-bb46-200a641c7648-kube-api-access-clq27\") pod \"redhat-marketplace-jk49d\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") " pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.960900 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-catalog-content\") pod \"redhat-marketplace-jk49d\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") " pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.961500 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-catalog-content\") pod \"redhat-marketplace-jk49d\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") " pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:27 crc kubenswrapper[4689]: I1201 08:41:27.961809 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-utilities\") pod \"redhat-marketplace-jk49d\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") " pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.042470 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clq27\" (UniqueName: \"kubernetes.io/projected/2a3d70f2-3da1-4712-bb46-200a641c7648-kube-api-access-clq27\") pod \"redhat-marketplace-jk49d\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") " pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.185155 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jk49d" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.243794 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l9s25"] Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.315135 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.323574 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.448317 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kvdm" event={"ID":"e5e4c105-766f-4c1a-befe-a059da17406f","Type":"ContainerStarted","Data":"46ad426173657b5d5f0ed4f4d81dabf7c8b6aa86903a0e975aaf9b32be2a200a"} Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.473289 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9s25"] Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.572990 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9wb\" (UniqueName: \"kubernetes.io/projected/a02d72db-aa64-4300-acc0-93b8677bf6df-kube-api-access-kq9wb\") pod \"redhat-operators-l9s25\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") " pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.573083 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-utilities\") pod \"redhat-operators-l9s25\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") " pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.573222 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-catalog-content\") pod \"redhat-operators-l9s25\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") " pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.691227 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z26c6"] Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.692780 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" event={"ID":"5d6a08d0-a948-4c69-b3f0-f5e084adb453","Type":"ContainerStarted","Data":"9b9f14167cb052f15c80f3b76e7c20ababb7697ca7d10b4a4addc37cb7c485db"} Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.692937 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.693011 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-catalog-content\") pod \"redhat-operators-l9s25\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") " pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.693079 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq9wb\" (UniqueName: \"kubernetes.io/projected/a02d72db-aa64-4300-acc0-93b8677bf6df-kube-api-access-kq9wb\") pod \"redhat-operators-l9s25\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") " pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.693127 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-utilities\") pod \"redhat-operators-l9s25\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") " pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.693762 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-utilities\") pod \"redhat-operators-l9s25\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") " pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.694172 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-catalog-content\") pod \"redhat-operators-l9s25\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") " pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.712229 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:28 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:28 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:28 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.712312 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.830566 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-utilities\") pod \"redhat-operators-z26c6\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") " pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.830672 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxp5n\" (UniqueName: \"kubernetes.io/projected/6729f1b7-260e-4a90-a2da-1258e036b9ea-kube-api-access-mxp5n\") pod \"redhat-operators-z26c6\" 
(UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") " pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.830702 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-catalog-content\") pod \"redhat-operators-z26c6\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") " pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.874610 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z26c6"] Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.940642 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxp5n\" (UniqueName: \"kubernetes.io/projected/6729f1b7-260e-4a90-a2da-1258e036b9ea-kube-api-access-mxp5n\") pod \"redhat-operators-z26c6\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") " pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.940756 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-catalog-content\") pod \"redhat-operators-z26c6\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") " pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.940880 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-utilities\") pod \"redhat-operators-z26c6\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") " pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.941765 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-utilities\") pod \"redhat-operators-z26c6\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") " pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:28 crc kubenswrapper[4689]: I1201 08:41:28.941856 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-catalog-content\") pod \"redhat-operators-z26c6\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") " pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.057575 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.115894 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wrw72" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.129334 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t8fdl\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.159686 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq9wb\" (UniqueName: \"kubernetes.io/projected/a02d72db-aa64-4300-acc0-93b8677bf6df-kube-api-access-kq9wb\") pod \"redhat-operators-l9s25\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") " pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.230967 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.238773 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxp5n\" (UniqueName: \"kubernetes.io/projected/6729f1b7-260e-4a90-a2da-1258e036b9ea-kube-api-access-mxp5n\") pod \"redhat-operators-z26c6\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") " pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.267720 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.328307 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twqmb"] Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.402778 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.450037 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.513153 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hwvv4"] Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.526497 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:29 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:29 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:29 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.526584 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.616167 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6gkfv"] Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.717598 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" event={"ID":"5d6a08d0-a948-4c69-b3f0-f5e084adb453","Type":"ContainerStarted","Data":"caa460174dcb6c37e3b3fabf23c6b03f0c5b3f40be7bb9b89be33e4e6975665c"} Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.737754 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.784936 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"b055b7a2-406e-470c-87ea-9f71ecc3caf8","Type":"ContainerStarted","Data":"a491116cc3a96050d37ee7ca73a7358b9b815f6c4e2c6433fcfdda6e4b0f569a"} Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.850069 4689 generic.go:334] "Generic (PLEG): container finished" podID="e5e4c105-766f-4c1a-befe-a059da17406f" containerID="5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5" exitCode=0 Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.851235 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kvdm" event={"ID":"e5e4c105-766f-4c1a-befe-a059da17406f","Type":"ContainerDied","Data":"5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5"} Dec 01 08:41:29 crc kubenswrapper[4689]: I1201 08:41:29.855567 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twqmb" event={"ID":"a49ba834-1d80-4003-bf95-6dfd68b25a49","Type":"ContainerStarted","Data":"806cd1a6841fce146b5c5da9d75005f3dac73b478d3551d9ac473708a515e4d6"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.100309 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4zck"] Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.145699 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 08:41:30 crc kubenswrapper[4689]: W1201 08:41:30.164320 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1860f8a4_ce73_4d74_8dcf_0a43a90d35b9.slice/crio-cd9db9cebf3ac95af38dd29dfd136498499bb028fa8e151c4d3a553200e92585 WatchSource:0}: Error finding container cd9db9cebf3ac95af38dd29dfd136498499bb028fa8e151c4d3a553200e92585: Status 404 returned error can't find the container with id cd9db9cebf3ac95af38dd29dfd136498499bb028fa8e151c4d3a553200e92585 Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.349502 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.360740 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.349773 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.361382 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.370961 4689 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ch9jh container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]log ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]etcd ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/generic-apiserver-start-informers ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/max-in-flight-filter ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 01 08:41:30 crc kubenswrapper[4689]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/project.openshift.io-projectcache ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-startinformers ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 01 08:41:30 crc kubenswrapper[4689]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 01 08:41:30 crc kubenswrapper[4689]: livez check failed Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.371086 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" podUID="c389f615-2c0f-467b-924e-ad740d3fff07" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.491930 4689 patch_prober.go:28] interesting pod/console-f9d7485db-j5r2f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.491991 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j5r2f" podUID="710ccb76-093a-484d-a784-737ae81e7c21" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.539385 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:30 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:30 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:30 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.540306 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.558939 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9s25"] Dec 01 08:41:30 crc kubenswrapper[4689]: W1201 08:41:30.705646 4689 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02d72db_aa64_4300_acc0_93b8677bf6df.slice/crio-c24a69ea2edb33e738d30a648fd3b549fd2c24d45614694e3a0fb6140a7ffa96 WatchSource:0}: Error finding container c24a69ea2edb33e738d30a648fd3b549fd2c24d45614694e3a0fb6140a7ffa96: Status 404 returned error can't find the container with id c24a69ea2edb33e738d30a648fd3b549fd2c24d45614694e3a0fb6140a7ffa96 Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.710447 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z26c6"] Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.762368 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.779257 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.788856 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.866157 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jk49d"] Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.907301 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26c6" event={"ID":"6729f1b7-260e-4a90-a2da-1258e036b9ea","Type":"ContainerStarted","Data":"3024ef3e65427744783eb0d8f92b3875f5d2a9ef21f13896f13dec1f9767c687"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.908810 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwvv4" event={"ID":"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73","Type":"ContainerStarted","Data":"92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.908829 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwvv4" event={"ID":"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73","Type":"ContainerStarted","Data":"9c19c3d71b518cb5197147fc9b88da3e1cbaf9cffa01032fb2d85ab005740859"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.909905 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"507dc54c-e4ea-4b41-a390-fcc3123a7859","Type":"ContainerStarted","Data":"574a4f4633004d98233e5a4fb660e8a07285a5c01d65ffdafcfdc0a659d77483"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.910567 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9s25" event={"ID":"a02d72db-aa64-4300-acc0-93b8677bf6df","Type":"ContainerStarted","Data":"c24a69ea2edb33e738d30a648fd3b549fd2c24d45614694e3a0fb6140a7ffa96"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.911514 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b055b7a2-406e-470c-87ea-9f71ecc3caf8","Type":"ContainerStarted","Data":"3fc5a7e96cb7f5827c4077a5120e6e0036b564da2828dbbd248ba2b6b9eb26ca"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.914980 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jtwvs" 
event={"ID":"5d6a08d0-a948-4c69-b3f0-f5e084adb453","Type":"ContainerStarted","Data":"451196ae6df57fd0f45e9b26e281c9e37c0fdfee86a6023da420109d3b6b410e"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.916460 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4zck" event={"ID":"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9","Type":"ContainerStarted","Data":"0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.916486 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4zck" event={"ID":"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9","Type":"ContainerStarted","Data":"cd9db9cebf3ac95af38dd29dfd136498499bb028fa8e151c4d3a553200e92585"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.917851 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gkfv" event={"ID":"2790f0e0-bca7-4070-8d79-72ae564043ef","Type":"ContainerStarted","Data":"a1285b3b17210603941d93eb1f3b9d14f6dcae4fc98cf7b47167bdd2d433fb51"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.917885 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gkfv" event={"ID":"2790f0e0-bca7-4070-8d79-72ae564043ef","Type":"ContainerStarted","Data":"9bcc3463687c41d144b3d915cbd4ff095a197e9ce3469ae38975bfa356a52988"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.931336 4689 generic.go:334] "Generic (PLEG): container finished" podID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerID="79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4" exitCode=0 Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.931992 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twqmb" event={"ID":"a49ba834-1d80-4003-bf95-6dfd68b25a49","Type":"ContainerDied","Data":"79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4"} Dec 01 08:41:30 crc kubenswrapper[4689]: I1201 08:41:30.997317 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.997293304 podStartE2EDuration="4.997293304s" podCreationTimestamp="2025-12-01 08:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:30.995479127 +0000 UTC m=+171.067767031" watchObservedRunningTime="2025-12-01 08:41:30.997293304 +0000 UTC m=+171.069581208" Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.012237 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8fdl"] Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.152786 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jtwvs" podStartSLOduration=150.152759274 podStartE2EDuration="2m30.152759274s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:31.144564503 +0000 UTC m=+171.216852407" watchObservedRunningTime="2025-12-01 08:41:31.152759274 +0000 UTC m=+171.225047178" Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.234627 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.319372 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bjlxg" Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.565417 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:31 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:31 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:31 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.565941 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.953915 4689 generic.go:334] "Generic (PLEG): container finished" podID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerID="5d810298f1db3066bb5475e8df6c93e4dc11fe825d906aaaef3cd96b0a54ec4c" exitCode=0 Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.954012 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26c6" event={"ID":"6729f1b7-260e-4a90-a2da-1258e036b9ea","Type":"ContainerDied","Data":"5d810298f1db3066bb5475e8df6c93e4dc11fe825d906aaaef3cd96b0a54ec4c"} Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.965045 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"507dc54c-e4ea-4b41-a390-fcc3123a7859","Type":"ContainerStarted","Data":"ae6245c92e525a5740e3bd33bc62eddb6cb92cb9b654de99ab9d381366797db5"} Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.993546 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" event={"ID":"7d86e20d-febe-4cfb-a738-4705f8122326","Type":"ContainerStarted","Data":"e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1"} Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.993613 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" event={"ID":"7d86e20d-febe-4cfb-a738-4705f8122326","Type":"ContainerStarted","Data":"251bf0202d38ad8c2b8d502884a7afb8940b19cb08935a73016cf046f9a44416"} Dec 01 08:41:31 crc kubenswrapper[4689]: I1201 08:41:31.994174 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.001962 4689 generic.go:334] "Generic (PLEG): container finished" podID="b055b7a2-406e-470c-87ea-9f71ecc3caf8" containerID="3fc5a7e96cb7f5827c4077a5120e6e0036b564da2828dbbd248ba2b6b9eb26ca" exitCode=0 Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.002140 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b055b7a2-406e-470c-87ea-9f71ecc3caf8","Type":"ContainerDied","Data":"3fc5a7e96cb7f5827c4077a5120e6e0036b564da2828dbbd248ba2b6b9eb26ca"} Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.003580 4689 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=6.003531696 podStartE2EDuration="6.003531696s" podCreationTimestamp="2025-12-01 08:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:31.991685108 +0000 UTC m=+172.063973012" watchObservedRunningTime="2025-12-01 08:41:32.003531696 +0000 UTC m=+172.075819600" Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.007797 4689 generic.go:334] "Generic (PLEG): container finished" podID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerID="a1285b3b17210603941d93eb1f3b9d14f6dcae4fc98cf7b47167bdd2d433fb51" exitCode=0 Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.007881 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gkfv" event={"ID":"2790f0e0-bca7-4070-8d79-72ae564043ef","Type":"ContainerDied","Data":"a1285b3b17210603941d93eb1f3b9d14f6dcae4fc98cf7b47167bdd2d433fb51"} Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.018747 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" podStartSLOduration=151.018713259 podStartE2EDuration="2m31.018713259s" podCreationTimestamp="2025-12-01 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:41:32.017582984 +0000 UTC m=+172.089870908" watchObservedRunningTime="2025-12-01 08:41:32.018713259 +0000 UTC m=+172.091001163" Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.041393 4689 generic.go:334] "Generic (PLEG): container finished" podID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerID="5a2ff6c6d1d83088a789254fafd840d8d05495d2e1ece71d190f993fa368eb10" exitCode=0 Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.041534 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jk49d" event={"ID":"2a3d70f2-3da1-4712-bb46-200a641c7648","Type":"ContainerDied","Data":"5a2ff6c6d1d83088a789254fafd840d8d05495d2e1ece71d190f993fa368eb10"} Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.041569 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jk49d" event={"ID":"2a3d70f2-3da1-4712-bb46-200a641c7648","Type":"ContainerStarted","Data":"a5528f12af23a3e91d23232381826dbc4f5dddab4cf4ab8bc307d8dd8f821c6d"} Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.055845 4689 generic.go:334] "Generic (PLEG): container finished" podID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerID="92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a" exitCode=0 Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.056001 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwvv4" event={"ID":"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73","Type":"ContainerDied","Data":"92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a"} Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.075453 4689 generic.go:334] "Generic (PLEG): container finished" podID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerID="0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b" exitCode=0 Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.075585 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-c4zck" event={"ID":"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9","Type":"ContainerDied","Data":"0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b"} Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.110250 4689 generic.go:334] "Generic (PLEG): container finished" podID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerID="a6d1801b3be7323c166a363fd3eb4cbf9313874621e48d6e2b858add8dbcf416" exitCode=0 Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.111633 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9s25" event={"ID":"a02d72db-aa64-4300-acc0-93b8677bf6df","Type":"ContainerDied","Data":"a6d1801b3be7323c166a363fd3eb4cbf9313874621e48d6e2b858add8dbcf416"} Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.515968 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:32 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:32 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:32 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:32 crc kubenswrapper[4689]: I1201 08:41:32.516454 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:33 crc kubenswrapper[4689]: I1201 08:41:33.154535 4689 generic.go:334] "Generic (PLEG): container finished" podID="507dc54c-e4ea-4b41-a390-fcc3123a7859" containerID="ae6245c92e525a5740e3bd33bc62eddb6cb92cb9b654de99ab9d381366797db5" exitCode=0 Dec 01 08:41:33 crc kubenswrapper[4689]: I1201 08:41:33.154648 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"507dc54c-e4ea-4b41-a390-fcc3123a7859","Type":"ContainerDied","Data":"ae6245c92e525a5740e3bd33bc62eddb6cb92cb9b654de99ab9d381366797db5"} Dec 01 08:41:33 crc kubenswrapper[4689]: I1201 08:41:33.537968 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:33 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:33 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:33 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:33 crc kubenswrapper[4689]: I1201 08:41:33.538054 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:33 crc kubenswrapper[4689]: I1201 08:41:33.951095 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.195790 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.195770 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b055b7a2-406e-470c-87ea-9f71ecc3caf8","Type":"ContainerDied","Data":"a491116cc3a96050d37ee7ca73a7358b9b815f6c4e2c6433fcfdda6e4b0f569a"} Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.195861 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a491116cc3a96050d37ee7ca73a7358b9b815f6c4e2c6433fcfdda6e4b0f569a" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.222190 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kubelet-dir\") pod \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\" (UID: \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\") " Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.222483 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kube-api-access\") pod \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\" (UID: \"b055b7a2-406e-470c-87ea-9f71ecc3caf8\") " Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.222596 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b055b7a2-406e-470c-87ea-9f71ecc3caf8" (UID: "b055b7a2-406e-470c-87ea-9f71ecc3caf8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.223834 4689 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.252593 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b055b7a2-406e-470c-87ea-9f71ecc3caf8" (UID: "b055b7a2-406e-470c-87ea-9f71ecc3caf8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.327341 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b055b7a2-406e-470c-87ea-9f71ecc3caf8-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.515148 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:34 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:34 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:34 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.515208 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.841303 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.938901 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/507dc54c-e4ea-4b41-a390-fcc3123a7859-kubelet-dir\") pod \"507dc54c-e4ea-4b41-a390-fcc3123a7859\" (UID: \"507dc54c-e4ea-4b41-a390-fcc3123a7859\") " Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.939040 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/507dc54c-e4ea-4b41-a390-fcc3123a7859-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "507dc54c-e4ea-4b41-a390-fcc3123a7859" (UID: "507dc54c-e4ea-4b41-a390-fcc3123a7859"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.939095 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/507dc54c-e4ea-4b41-a390-fcc3123a7859-kube-api-access\") pod \"507dc54c-e4ea-4b41-a390-fcc3123a7859\" (UID: \"507dc54c-e4ea-4b41-a390-fcc3123a7859\") " Dec 01 08:41:34 crc kubenswrapper[4689]: I1201 08:41:34.939601 4689 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/507dc54c-e4ea-4b41-a390-fcc3123a7859-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 08:41:35 crc kubenswrapper[4689]: I1201 08:41:34.997451 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507dc54c-e4ea-4b41-a390-fcc3123a7859-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "507dc54c-e4ea-4b41-a390-fcc3123a7859" (UID: "507dc54c-e4ea-4b41-a390-fcc3123a7859"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:41:35 crc kubenswrapper[4689]: I1201 08:41:35.041005 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/507dc54c-e4ea-4b41-a390-fcc3123a7859-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 08:41:35 crc kubenswrapper[4689]: I1201 08:41:35.235288 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"507dc54c-e4ea-4b41-a390-fcc3123a7859","Type":"ContainerDied","Data":"574a4f4633004d98233e5a4fb660e8a07285a5c01d65ffdafcfdc0a659d77483"} Dec 01 08:41:35 crc kubenswrapper[4689]: I1201 08:41:35.235351 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574a4f4633004d98233e5a4fb660e8a07285a5c01d65ffdafcfdc0a659d77483" Dec 01 08:41:35 crc kubenswrapper[4689]: I1201 08:41:35.235471 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 08:41:35 crc kubenswrapper[4689]: I1201 08:41:35.376526 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:35 crc kubenswrapper[4689]: I1201 08:41:35.381497 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ch9jh" Dec 01 08:41:35 crc kubenswrapper[4689]: I1201 08:41:35.514750 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:35 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:35 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:35 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:35 crc kubenswrapper[4689]: I1201 08:41:35.514853 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:36 crc kubenswrapper[4689]: I1201 08:41:36.544853 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:36 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:36 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:36 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:36 crc kubenswrapper[4689]: I1201 08:41:36.544982 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:37 crc kubenswrapper[4689]: I1201 08:41:37.514874 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:37 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:37 crc kubenswrapper[4689]: [+]process-running 
ok Dec 01 08:41:37 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:37 crc kubenswrapper[4689]: I1201 08:41:37.515978 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:38 crc kubenswrapper[4689]: I1201 08:41:38.515425 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:38 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:38 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:38 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:38 crc kubenswrapper[4689]: I1201 08:41:38.515598 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:39 crc kubenswrapper[4689]: I1201 08:41:39.148319 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:41:39 crc kubenswrapper[4689]: I1201 08:41:39.148459 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:41:39 crc kubenswrapper[4689]: I1201 08:41:39.516762 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:39 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:39 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:39 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:39 crc kubenswrapper[4689]: I1201 08:41:39.516960 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:40 crc kubenswrapper[4689]: I1201 08:41:40.376886 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 08:41:40 crc kubenswrapper[4689]: I1201 08:41:40.489259 4689 patch_prober.go:28] interesting pod/console-f9d7485db-j5r2f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 01 08:41:40 crc kubenswrapper[4689]: I1201 08:41:40.489388 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j5r2f" podUID="710ccb76-093a-484d-a784-737ae81e7c21" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 01 08:41:40 crc kubenswrapper[4689]: I1201 08:41:40.514213 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:40 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:40 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:40 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:40 crc kubenswrapper[4689]: I1201 08:41:40.514291 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:41 crc kubenswrapper[4689]: I1201 08:41:41.517670 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:41 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:41 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:41 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:41 crc kubenswrapper[4689]: I1201 08:41:41.518405 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:42 crc kubenswrapper[4689]: I1201 08:41:42.518607 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:42 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:42 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:42 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:42 crc kubenswrapper[4689]: I1201 08:41:42.518705 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:43 crc kubenswrapper[4689]: I1201 08:41:43.516141 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:43 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:43 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:43 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:43 crc kubenswrapper[4689]: I1201 08:41:43.516892 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:44 crc kubenswrapper[4689]: I1201 08:41:44.523755 4689 patch_prober.go:28] interesting 
pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:44 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:44 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:44 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:44 crc kubenswrapper[4689]: I1201 08:41:44.523847 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:45 crc kubenswrapper[4689]: I1201 08:41:45.514765 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:45 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:45 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:45 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:45 crc kubenswrapper[4689]: I1201 08:41:45.514882 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:46 crc kubenswrapper[4689]: I1201 08:41:46.512790 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:46 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:46 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:46 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:46 crc kubenswrapper[4689]: I1201 08:41:46.512863 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:47 crc kubenswrapper[4689]: I1201 08:41:47.517105 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:47 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:47 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:47 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:47 crc kubenswrapper[4689]: I1201 08:41:47.518250 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:48 crc kubenswrapper[4689]: I1201 08:41:48.513609 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:48 crc 
kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:48 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:48 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:48 crc kubenswrapper[4689]: I1201 08:41:48.513688 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:49 crc kubenswrapper[4689]: I1201 08:41:49.058727 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 08:41:49 crc kubenswrapper[4689]: I1201 08:41:49.461828 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:41:49 crc kubenswrapper[4689]: I1201 08:41:49.512731 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:49 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:49 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:49 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:49 crc kubenswrapper[4689]: I1201 08:41:49.512833 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:50 crc kubenswrapper[4689]: I1201 08:41:50.489806 4689 patch_prober.go:28] interesting pod/console-f9d7485db-j5r2f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 01 08:41:50 crc kubenswrapper[4689]: I1201 08:41:50.489884 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j5r2f" podUID="710ccb76-093a-484d-a784-737ae81e7c21" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 01 08:41:50 crc kubenswrapper[4689]: I1201 08:41:50.530435 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:50 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:50 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:50 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:50 crc kubenswrapper[4689]: I1201 08:41:50.530596 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:50 crc kubenswrapper[4689]: I1201 08:41:50.617504 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" Dec 01 08:41:51 crc kubenswrapper[4689]: I1201 08:41:51.513309 4689 patch_prober.go:28] 
interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:51 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:51 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:51 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:51 crc kubenswrapper[4689]: I1201 08:41:51.513679 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:52 crc kubenswrapper[4689]: I1201 08:41:52.516084 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:52 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:52 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:52 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:52 crc kubenswrapper[4689]: I1201 08:41:52.516221 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:53 crc kubenswrapper[4689]: I1201 08:41:53.513649 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:53 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:53 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:53 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:53 crc kubenswrapper[4689]: I1201 08:41:53.514138 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:54 crc kubenswrapper[4689]: I1201 08:41:54.539995 4689 patch_prober.go:28] interesting pod/router-default-5444994796-hb577 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 08:41:54 crc kubenswrapper[4689]: [-]has-synced failed: reason withheld Dec 01 08:41:54 crc kubenswrapper[4689]: [+]process-running ok Dec 01 08:41:54 crc kubenswrapper[4689]: healthz check failed Dec 01 08:41:54 crc kubenswrapper[4689]: I1201 08:41:54.540668 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hb577" podUID="3bb10b5c-893e-422d-a60f-101f4717b0bc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:41:55 crc kubenswrapper[4689]: I1201 08:41:55.513871 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:55 crc kubenswrapper[4689]: I1201 08:41:55.516163 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-ingress/router-default-5444994796-hb577" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.323298 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 08:41:59 crc kubenswrapper[4689]: E1201 08:41:59.324257 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b055b7a2-406e-470c-87ea-9f71ecc3caf8" containerName="pruner" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.324270 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b055b7a2-406e-470c-87ea-9f71ecc3caf8" containerName="pruner" Dec 01 08:41:59 crc kubenswrapper[4689]: E1201 08:41:59.324291 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507dc54c-e4ea-4b41-a390-fcc3123a7859" containerName="pruner" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.324297 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="507dc54c-e4ea-4b41-a390-fcc3123a7859" containerName="pruner" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.324435 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b055b7a2-406e-470c-87ea-9f71ecc3caf8" containerName="pruner" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.324458 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="507dc54c-e4ea-4b41-a390-fcc3123a7859" containerName="pruner" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.324857 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.327490 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.327678 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.338777 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.538666 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c53baf2d-eb84-4da5-938a-675c325fc6dc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c53baf2d-eb84-4da5-938a-675c325fc6dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.538796 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c53baf2d-eb84-4da5-938a-675c325fc6dc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c53baf2d-eb84-4da5-938a-675c325fc6dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.644207 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c53baf2d-eb84-4da5-938a-675c325fc6dc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c53baf2d-eb84-4da5-938a-675c325fc6dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.644301 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/c53baf2d-eb84-4da5-938a-675c325fc6dc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c53baf2d-eb84-4da5-938a-675c325fc6dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.644442 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c53baf2d-eb84-4da5-938a-675c325fc6dc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c53baf2d-eb84-4da5-938a-675c325fc6dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.674612 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c53baf2d-eb84-4da5-938a-675c325fc6dc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c53baf2d-eb84-4da5-938a-675c325fc6dc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 08:41:59 crc kubenswrapper[4689]: I1201 08:41:59.959768 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 08:42:00 crc kubenswrapper[4689]: I1201 08:42:00.509757 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:42:00 crc kubenswrapper[4689]: I1201 08:42:00.523905 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.539383 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.541393 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.542294 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.628347 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-var-lock\") pod \"installer-9-crc\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.628503 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51372c30-ea27-438b-ba20-741b5e630044-kube-api-access\") pod \"installer-9-crc\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.628585 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-kubelet-dir\") pod \"installer-9-crc\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.730017 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-kubelet-dir\") pod \"installer-9-crc\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.730114 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-var-lock\") pod \"installer-9-crc\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.730149 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51372c30-ea27-438b-ba20-741b5e630044-kube-api-access\") pod \"installer-9-crc\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.730282 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-kubelet-dir\") pod \"installer-9-crc\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.730406 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-var-lock\") pod \"installer-9-crc\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.766928 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51372c30-ea27-438b-ba20-741b5e630044-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"51372c30-ea27-438b-ba20-741b5e630044\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:04 crc kubenswrapper[4689]: I1201 08:42:04.870058 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:07 crc kubenswrapper[4689]: E1201 08:42:07.990497 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 01 08:42:07 crc kubenswrapper[4689]: E1201 08:42:07.995532 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2n49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6gkfv_openshift-marketplace(2790f0e0-bca7-4070-8d79-72ae564043ef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 08:42:07 crc kubenswrapper[4689]: E1201 08:42:07.996759 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6gkfv" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" Dec 01 08:42:09 crc kubenswrapper[4689]: I1201 08:42:09.146893 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:42:09 crc kubenswrapper[4689]: I1201 08:42:09.147342 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:42:09 crc kubenswrapper[4689]: I1201 08:42:09.147415 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:42:09 crc kubenswrapper[4689]: I1201 08:42:09.148718 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 08:42:09 crc kubenswrapper[4689]: I1201 08:42:09.150125 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a" gracePeriod=600 Dec 01 08:42:09 crc kubenswrapper[4689]: I1201 08:42:09.815994 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a" exitCode=0 Dec 01 08:42:09 crc kubenswrapper[4689]: I1201 08:42:09.816062 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a"} Dec 01 08:42:10 crc kubenswrapper[4689]: I1201 08:42:10.170185 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8rfdp"] Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.404021 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6gkfv" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.512200 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.512407 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnvrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-c4zck_openshift-marketplace(1860f8a4-ce73-4d74-8dcf-0a43a90d35b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.514993 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-c4zck" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.525904 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.526084 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxp5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-z26c6_openshift-marketplace(6729f1b7-260e-4a90-a2da-1258e036b9ea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.527568 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-z26c6" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.617056 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.617600 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vc7zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hwvv4_openshift-marketplace(cdd81b3a-e9ab-4f49-b621-3e16eed7ac73): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.617904 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.618117 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kq9wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-l9s25_openshift-marketplace(a02d72db-aa64-4300-acc0-93b8677bf6df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.618809 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hwvv4" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73"
Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.619340 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l9s25" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df"
Dec 01 08:42:13 crc kubenswrapper[4689]: I1201 08:42:13.840221 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 01 08:42:13 crc kubenswrapper[4689]: I1201 08:42:13.860915 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jk49d" event={"ID":"2a3d70f2-3da1-4712-bb46-200a641c7648","Type":"ContainerStarted","Data":"9bcfd575f35378156db01a95a91ef619ff21b3be98610ce280cdca03186b1191"}
Dec 01 08:42:13 crc kubenswrapper[4689]: I1201 08:42:13.890560 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twqmb" event={"ID":"a49ba834-1d80-4003-bf95-6dfd68b25a49","Type":"ContainerStarted","Data":"49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0"}
Dec 01 08:42:13 crc kubenswrapper[4689]: I1201 08:42:13.903518 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"37b1b11c7bc8ffe4ab73103e6e1b196742e6409d79e78201bda5211f96e4082a"}
Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.918106 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-z26c6" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea"
Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.918325 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l9s25" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df"
Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.922583 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hwvv4" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73"
Dec 01 08:42:13 crc kubenswrapper[4689]: E1201 08:42:13.922668 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-c4zck" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9"
Dec 01 08:42:13 crc kubenswrapper[4689]: I1201 08:42:13.972573 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.910801 4689 generic.go:334] "Generic (PLEG): container finished" podID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerID="9bcfd575f35378156db01a95a91ef619ff21b3be98610ce280cdca03186b1191" exitCode=0
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.911072 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jk49d" event={"ID":"2a3d70f2-3da1-4712-bb46-200a641c7648","Type":"ContainerDied","Data":"9bcfd575f35378156db01a95a91ef619ff21b3be98610ce280cdca03186b1191"}
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.916680 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"51372c30-ea27-438b-ba20-741b5e630044","Type":"ContainerStarted","Data":"1263410c61f6b949d666e25834717063f8fc60babfd1780d84e579eda720873e"}
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.916753 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"51372c30-ea27-438b-ba20-741b5e630044","Type":"ContainerStarted","Data":"c4dca92dbf45b58b6c2f2fefde135a09fa22fd5492cef1b6cd1667b1a86bfea6"}
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.921960 4689 generic.go:334] "Generic (PLEG): container finished" podID="e5e4c105-766f-4c1a-befe-a059da17406f" containerID="cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b" exitCode=0
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.922056 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kvdm" event={"ID":"e5e4c105-766f-4c1a-befe-a059da17406f","Type":"ContainerDied","Data":"cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b"}
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.925760 4689 generic.go:334] "Generic (PLEG): container finished" podID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerID="49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0" exitCode=0
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.926176 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twqmb" event={"ID":"a49ba834-1d80-4003-bf95-6dfd68b25a49","Type":"ContainerDied","Data":"49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0"}
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.930201 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c53baf2d-eb84-4da5-938a-675c325fc6dc","Type":"ContainerStarted","Data":"2865c89140115e9b374251ee9ac1bfe34bb67aa6131a8251f1f28bf529bad846"}
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.930629 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c53baf2d-eb84-4da5-938a-675c325fc6dc","Type":"ContainerStarted","Data":"f4694086b5e033ee7cafbffd430bff25779ea752f898fcbc55ca46a2eb4228c5"}
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.954252 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=15.95420984 podStartE2EDuration="15.95420984s" podCreationTimestamp="2025-12-01 08:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:42:14.949606964 +0000 UTC m=+215.021894868" watchObservedRunningTime="2025-12-01 08:42:14.95420984 +0000 UTC m=+215.026497734"
Dec 01 08:42:14 crc kubenswrapper[4689]: I1201 08:42:14.973026 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.973005014 podStartE2EDuration="10.973005014s" podCreationTimestamp="2025-12-01 08:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:42:14.971319435 +0000 UTC m=+215.043607339" watchObservedRunningTime="2025-12-01 08:42:14.973005014 +0000 UTC m=+215.045292918"
Dec 01 08:42:15 crc kubenswrapper[4689]: I1201 08:42:15.947829 4689 generic.go:334] "Generic (PLEG): container finished" podID="c53baf2d-eb84-4da5-938a-675c325fc6dc" containerID="2865c89140115e9b374251ee9ac1bfe34bb67aa6131a8251f1f28bf529bad846" exitCode=0
Dec 01 08:42:15 crc kubenswrapper[4689]: I1201 08:42:15.948449 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c53baf2d-eb84-4da5-938a-675c325fc6dc","Type":"ContainerDied","Data":"2865c89140115e9b374251ee9ac1bfe34bb67aa6131a8251f1f28bf529bad846"}
Dec 01 08:42:15 crc kubenswrapper[4689]: I1201 08:42:15.951972 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jk49d" event={"ID":"2a3d70f2-3da1-4712-bb46-200a641c7648","Type":"ContainerStarted","Data":"a233db44d9bf1bd9e7e6594a21247d54002b11b4e8a2065f9112d4cc3e395cfc"}
Dec 01 08:42:15 crc kubenswrapper[4689]: I1201 08:42:15.957307 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kvdm" event={"ID":"e5e4c105-766f-4c1a-befe-a059da17406f","Type":"ContainerStarted","Data":"21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950"}
Dec 01 08:42:15 crc kubenswrapper[4689]: I1201 08:42:15.991141 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jk49d" podStartSLOduration=5.310743667 podStartE2EDuration="48.9911188s" podCreationTimestamp="2025-12-01 08:41:27 +0000 UTC" firstStartedPulling="2025-12-01 08:41:32.047746603 +0000 UTC m=+172.120034497" lastFinishedPulling="2025-12-01 08:42:15.728121726 +0000 UTC m=+215.800409630" observedRunningTime="2025-12-01 08:42:15.987197574 +0000 UTC m=+216.059485488" watchObservedRunningTime="2025-12-01 08:42:15.9911188 +0000 UTC m=+216.063406704"
Dec 01 08:42:16 crc kubenswrapper[4689]: I1201 08:42:16.964225 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twqmb" event={"ID":"a49ba834-1d80-4003-bf95-6dfd68b25a49","Type":"ContainerStarted","Data":"44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c"}
Dec 01 08:42:16 crc kubenswrapper[4689]: I1201 08:42:16.991580 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-twqmb" podStartSLOduration=7.129035729 podStartE2EDuration="51.991550744s" podCreationTimestamp="2025-12-01 08:41:25 +0000 UTC" firstStartedPulling="2025-12-01 08:41:30.975318396 +0000 UTC m=+171.047606300" lastFinishedPulling="2025-12-01 08:42:15.837833411 +0000 UTC m=+215.910121315" observedRunningTime="2025-12-01 08:42:16.989083431 +0000 UTC m=+217.061371345" watchObservedRunningTime="2025-12-01 08:42:16.991550744 +0000 UTC m=+217.063838648"
Dec 01 08:42:16 crc kubenswrapper[4689]: I1201 08:42:16.993190 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4kvdm" podStartSLOduration=6.379070266 podStartE2EDuration="51.993180042s" podCreationTimestamp="2025-12-01 08:41:25 +0000 UTC" firstStartedPulling="2025-12-01 08:41:30.145230891 +0000 UTC m=+170.217518795" lastFinishedPulling="2025-12-01 08:42:15.759340667 +0000 UTC m=+215.831628571" observedRunningTime="2025-12-01 08:42:16.012343275 +0000 UTC m=+216.084631179" watchObservedRunningTime="2025-12-01 08:42:16.993180042 +0000 UTC m=+217.065467946"
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.329036 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.351476 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c53baf2d-eb84-4da5-938a-675c325fc6dc-kubelet-dir\") pod \"c53baf2d-eb84-4da5-938a-675c325fc6dc\" (UID: \"c53baf2d-eb84-4da5-938a-675c325fc6dc\") "
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.351593 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c53baf2d-eb84-4da5-938a-675c325fc6dc-kube-api-access\") pod \"c53baf2d-eb84-4da5-938a-675c325fc6dc\" (UID: \"c53baf2d-eb84-4da5-938a-675c325fc6dc\") "
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.351859 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c53baf2d-eb84-4da5-938a-675c325fc6dc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c53baf2d-eb84-4da5-938a-675c325fc6dc" (UID: "c53baf2d-eb84-4da5-938a-675c325fc6dc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.352131 4689 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c53baf2d-eb84-4da5-938a-675c325fc6dc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.357834 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53baf2d-eb84-4da5-938a-675c325fc6dc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c53baf2d-eb84-4da5-938a-675c325fc6dc" (UID: "c53baf2d-eb84-4da5-938a-675c325fc6dc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.454295 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c53baf2d-eb84-4da5-938a-675c325fc6dc-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.973181 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.982274 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c53baf2d-eb84-4da5-938a-675c325fc6dc","Type":"ContainerDied","Data":"f4694086b5e033ee7cafbffd430bff25779ea752f898fcbc55ca46a2eb4228c5"}
Dec 01 08:42:17 crc kubenswrapper[4689]: I1201 08:42:17.982326 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4694086b5e033ee7cafbffd430bff25779ea752f898fcbc55ca46a2eb4228c5"
Dec 01 08:42:18 crc kubenswrapper[4689]: I1201 08:42:18.187146 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jk49d"
Dec 01 08:42:18 crc kubenswrapper[4689]: I1201 08:42:18.187228 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jk49d"
Dec 01 08:42:18 crc kubenswrapper[4689]: I1201 08:42:18.375509 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jk49d"
Dec 01 08:42:25 crc kubenswrapper[4689]: I1201 08:42:25.832330 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4kvdm"
Dec 01 08:42:25 crc kubenswrapper[4689]: I1201 08:42:25.833106 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4kvdm"
Dec 01 08:42:25 crc kubenswrapper[4689]: I1201 08:42:25.897175 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4kvdm"
Dec 01 08:42:26 crc kubenswrapper[4689]: I1201 08:42:26.020535 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwvv4" event={"ID":"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73","Type":"ContainerStarted","Data":"5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915"}
Dec 01 08:42:26 crc kubenswrapper[4689]: I1201 08:42:26.060663 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4kvdm"
Dec 01 08:42:26 crc kubenswrapper[4689]: I1201 08:42:26.578083 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-twqmb"
Dec 01 08:42:26 crc kubenswrapper[4689]: I1201 08:42:26.578150 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-twqmb"
Dec 01 08:42:26 crc kubenswrapper[4689]: I1201 08:42:26.620710 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-twqmb"
Dec 01 08:42:27 crc kubenswrapper[4689]: I1201 08:42:27.029092 4689 generic.go:334] "Generic (PLEG): container finished" podID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerID="5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915" exitCode=0
Dec 01 08:42:27 crc kubenswrapper[4689]: I1201 08:42:27.029165 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwvv4" event={"ID":"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73","Type":"ContainerDied","Data":"5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915"}
Dec 01 08:42:27 crc kubenswrapper[4689]: I1201 08:42:27.075203 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-twqmb"
Dec 01 08:42:28 crc kubenswrapper[4689]: I1201 08:42:28.055716 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwvv4" event={"ID":"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73","Type":"ContainerStarted","Data":"df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a"}
Dec 01 08:42:28 crc kubenswrapper[4689]: I1201 08:42:28.099027 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hwvv4" podStartSLOduration=7.573152025 podStartE2EDuration="1m3.098992691s" podCreationTimestamp="2025-12-01 08:41:25 +0000 UTC" firstStartedPulling="2025-12-01 08:41:32.065672534 +0000 UTC m=+172.137960438" lastFinishedPulling="2025-12-01 08:42:27.5915132 +0000 UTC m=+227.663801104" observedRunningTime="2025-12-01 08:42:28.093132288 +0000 UTC m=+228.165420192" watchObservedRunningTime="2025-12-01 08:42:28.098992691 +0000 UTC m=+228.171280635"
Dec 01 08:42:28 crc kubenswrapper[4689]: I1201 08:42:28.302264 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jk49d"
Dec 01 08:42:29 crc kubenswrapper[4689]: I1201 08:42:29.062739 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9s25" event={"ID":"a02d72db-aa64-4300-acc0-93b8677bf6df","Type":"ContainerStarted","Data":"3c37e4e3406a81a196f0e8d6453acc11d0c09f19a4d62a25aa0197583e22e209"}
Dec 01 08:42:30 crc kubenswrapper[4689]: I1201 08:42:30.072470 4689 generic.go:334] "Generic (PLEG): container finished" podID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerID="3c37e4e3406a81a196f0e8d6453acc11d0c09f19a4d62a25aa0197583e22e209" exitCode=0
Dec 01 08:42:30 crc kubenswrapper[4689]: I1201 08:42:30.072553 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9s25" event={"ID":"a02d72db-aa64-4300-acc0-93b8677bf6df","Type":"ContainerDied","Data":"3c37e4e3406a81a196f0e8d6453acc11d0c09f19a4d62a25aa0197583e22e209"}
Dec 01 08:42:30 crc kubenswrapper[4689]: I1201 08:42:30.863429 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jk49d"]
Dec 01 08:42:30 crc kubenswrapper[4689]: I1201 08:42:30.864144 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jk49d" podUID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerName="registry-server" containerID="cri-o://a233db44d9bf1bd9e7e6594a21247d54002b11b4e8a2065f9112d4cc3e395cfc" gracePeriod=2
Dec 01 08:42:32 crc kubenswrapper[4689]: I1201 08:42:32.106934 4689 generic.go:334] "Generic (PLEG): container finished" podID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerID="a233db44d9bf1bd9e7e6594a21247d54002b11b4e8a2065f9112d4cc3e395cfc" exitCode=0
Dec 01 08:42:32 crc kubenswrapper[4689]: I1201 08:42:32.106985 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jk49d" event={"ID":"2a3d70f2-3da1-4712-bb46-200a641c7648","Type":"ContainerDied","Data":"a233db44d9bf1bd9e7e6594a21247d54002b11b4e8a2065f9112d4cc3e395cfc"}
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.250835 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" podUID="e85d92ae-30aa-4302-b217-43a48dcadd8a" containerName="oauth-openshift" containerID="cri-o://70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73" gracePeriod=15
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.388592 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jk49d"
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.507561 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-utilities\") pod \"2a3d70f2-3da1-4712-bb46-200a641c7648\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") "
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.507642 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-catalog-content\") pod \"2a3d70f2-3da1-4712-bb46-200a641c7648\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") "
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.507689 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clq27\" (UniqueName: \"kubernetes.io/projected/2a3d70f2-3da1-4712-bb46-200a641c7648-kube-api-access-clq27\") pod \"2a3d70f2-3da1-4712-bb46-200a641c7648\" (UID: \"2a3d70f2-3da1-4712-bb46-200a641c7648\") "
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.508923 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-utilities" (OuterVolumeSpecName: "utilities") pod "2a3d70f2-3da1-4712-bb46-200a641c7648" (UID: "2a3d70f2-3da1-4712-bb46-200a641c7648"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.519741 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3d70f2-3da1-4712-bb46-200a641c7648-kube-api-access-clq27" (OuterVolumeSpecName: "kube-api-access-clq27") pod "2a3d70f2-3da1-4712-bb46-200a641c7648" (UID: "2a3d70f2-3da1-4712-bb46-200a641c7648"). InnerVolumeSpecName "kube-api-access-clq27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.535117 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a3d70f2-3da1-4712-bb46-200a641c7648" (UID: "2a3d70f2-3da1-4712-bb46-200a641c7648"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.610342 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.610769 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3d70f2-3da1-4712-bb46-200a641c7648-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:35 crc kubenswrapper[4689]: I1201 08:42:35.610881 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clq27\" (UniqueName: \"kubernetes.io/projected/2a3d70f2-3da1-4712-bb46-200a641c7648-kube-api-access-clq27\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.132927 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jk49d" event={"ID":"2a3d70f2-3da1-4712-bb46-200a641c7648","Type":"ContainerDied","Data":"a5528f12af23a3e91d23232381826dbc4f5dddab4cf4ab8bc307d8dd8f821c6d"}
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.133098 4689 scope.go:117] "RemoveContainer" containerID="a233db44d9bf1bd9e7e6594a21247d54002b11b4e8a2065f9112d4cc3e395cfc"
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.133046 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jk49d"
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.178382 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jk49d"]
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.184223 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jk49d"]
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.205361 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hwvv4"
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.205524 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hwvv4"
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.268827 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hwvv4"
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.335956 4689 scope.go:117] "RemoveContainer" containerID="9bcfd575f35378156db01a95a91ef619ff21b3be98610ce280cdca03186b1191"
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.376420 4689 scope.go:117] "RemoveContainer" containerID="5a2ff6c6d1d83088a789254fafd840d8d05495d2e1ece71d190f993fa368eb10"
Dec 01 08:42:36 crc kubenswrapper[4689]: I1201 08:42:36.847471 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp"
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035476 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-trusted-ca-bundle\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035533 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-dir\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035597 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-provider-selection\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035658 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-service-ca\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035700 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szh8z\" (UniqueName: \"kubernetes.io/projected/e85d92ae-30aa-4302-b217-43a48dcadd8a-kube-api-access-szh8z\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035730 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-serving-cert\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035787 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-login\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035819 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-router-certs\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035854 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-ocp-branding-template\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035891 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-error\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035954 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-idp-0-file-data\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.035982 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-policies\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.036006 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-cliconfig\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.036033 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-session\") pod \"e85d92ae-30aa-4302-b217-43a48dcadd8a\" (UID: \"e85d92ae-30aa-4302-b217-43a48dcadd8a\") "
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.037154 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.037140 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.037260 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.038021 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.038748 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.054027 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.055203 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.055316 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85d92ae-30aa-4302-b217-43a48dcadd8a-kube-api-access-szh8z" (OuterVolumeSpecName: "kube-api-access-szh8z") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "kube-api-access-szh8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.056533 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.058114 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.058181 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.059085 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.059858 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.066456 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e85d92ae-30aa-4302-b217-43a48dcadd8a" (UID: "e85d92ae-30aa-4302-b217-43a48dcadd8a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.069471 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3d70f2-3da1-4712-bb46-200a641c7648" path="/var/lib/kubelet/pods/2a3d70f2-3da1-4712-bb46-200a641c7648/volumes"
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138298 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138434 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szh8z\" (UniqueName: \"kubernetes.io/projected/e85d92ae-30aa-4302-b217-43a48dcadd8a-kube-api-access-szh8z\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138476 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138504 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138535 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138573 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138602 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138632 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138654 4689 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138690 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138734 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138763 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138786 4689 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e85d92ae-30aa-4302-b217-43a48dcadd8a-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.138819 4689 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e85d92ae-30aa-4302-b217-43a48dcadd8a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.144523 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp"
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.144566 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" event={"ID":"e85d92ae-30aa-4302-b217-43a48dcadd8a","Type":"ContainerDied","Data":"70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73"}
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.144717 4689 scope.go:117] "RemoveContainer" containerID="70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73"
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.144433 4689 generic.go:334] "Generic (PLEG): container finished" podID="e85d92ae-30aa-4302-b217-43a48dcadd8a" containerID="70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73" exitCode=0
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.145216 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8rfdp" event={"ID":"e85d92ae-30aa-4302-b217-43a48dcadd8a","Type":"ContainerDied","Data":"e1d5b4003e6ac7729d5c114ed97812492f57ce66682981c0795e083d02dd1752"}
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.176743 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8rfdp"]
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.180227 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8rfdp"]
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.202240 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hwvv4"
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.370246 4689 scope.go:117] "RemoveContainer" containerID="70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73"
Dec 01 08:42:37 crc kubenswrapper[4689]: E1201 08:42:37.374481 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73\": container with ID starting with 70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73 not found: ID does not exist" containerID="70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73"
Dec 01 08:42:37 crc kubenswrapper[4689]: I1201 08:42:37.374590 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73"} err="failed to get container status \"70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73\": rpc error: code = NotFound desc = could not find container \"70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73\": container with ID starting with 70b59839ace2b780fff325d9fb0084344cdf75ad86e5fe3de51c16dff7ae0f73 not found: ID does not exist"
Dec 01 08:42:38 crc kubenswrapper[4689]: I1201 08:42:38.157045 4689 generic.go:334] "Generic (PLEG): container finished" podID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerID="3d0a6f0755b434599c6185fe6a3b4666c869a86a3a50a4c67602865af3f6b235" exitCode=0
Dec 01 08:42:38 crc kubenswrapper[4689]: I1201 08:42:38.157137 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26c6" event={"ID":"6729f1b7-260e-4a90-a2da-1258e036b9ea","Type":"ContainerDied","Data":"3d0a6f0755b434599c6185fe6a3b4666c869a86a3a50a4c67602865af3f6b235"}
Dec 01 08:42:38 crc kubenswrapper[4689]: I1201 08:42:38.162003 4689 generic.go:334] "Generic (PLEG): container finished" podID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerID="465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4" exitCode=0
Dec 01 08:42:38 crc kubenswrapper[4689]: I1201 08:42:38.162079 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4zck" event={"ID":"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9","Type":"ContainerDied","Data":"465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4"}
Dec 01 08:42:38 crc kubenswrapper[4689]: I1201 08:42:38.173692 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9s25" event={"ID":"a02d72db-aa64-4300-acc0-93b8677bf6df","Type":"ContainerStarted","Data":"b90fe638e664ed622cd75be22ec5fc0d9dea5edc51ddcdc9b4cf6c17045838a0"}
Dec 01 08:42:38 crc kubenswrapper[4689]: I1201 08:42:38.181308 4689 generic.go:334] "Generic (PLEG): container finished" podID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerID="95dddc55c7ef43ef995476c680ac865f3db512c409a4355f918a5e2a83607204" exitCode=0
Dec 01 08:42:38 crc kubenswrapper[4689]: I1201 08:42:38.181969 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gkfv" event={"ID":"2790f0e0-bca7-4070-8d79-72ae564043ef","Type":"ContainerDied","Data":"95dddc55c7ef43ef995476c680ac865f3db512c409a4355f918a5e2a83607204"}
Dec 01 08:42:38 crc kubenswrapper[4689]: I1201 08:42:38.261131 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l9s25" podStartSLOduration=4.952649001 podStartE2EDuration="1m10.261065126s" podCreationTimestamp="2025-12-01 08:41:28 +0000 UTC" firstStartedPulling="2025-12-01 08:41:32.115420518 +0000 UTC m=+172.187708422" lastFinishedPulling="2025-12-01 08:42:37.423836643 +0000 UTC m=+237.496124547" observedRunningTime="2025-12-01 08:42:38.25848873 +0000 UTC m=+238.330776654" watchObservedRunningTime="2025-12-01 08:42:38.261065126 +0000 UTC m=+238.333353030"
Dec 01 08:42:38 crc kubenswrapper[4689]: I1201 08:42:38.651221 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwvv4"]
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.057471 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85d92ae-30aa-4302-b217-43a48dcadd8a" path="/var/lib/kubelet/pods/e85d92ae-30aa-4302-b217-43a48dcadd8a/volumes"
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.188765 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4zck" event={"ID":"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9","Type":"ContainerStarted","Data":"994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825"}
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.191388 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gkfv" event={"ID":"2790f0e0-bca7-4070-8d79-72ae564043ef","Type":"ContainerStarted","Data":"8b4d746f0a2ce206cb7819078c2f094aceaeef7f49cdaa3486b844c92c2c27e6"}
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.194047 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26c6" event={"ID":"6729f1b7-260e-4a90-a2da-1258e036b9ea","Type":"ContainerStarted","Data":"6d7d3beddc20a2bf7df1809d2beb1552a8eb8de8e78a5ee6eaaa11077bbae855"}
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.194452 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hwvv4" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerName="registry-server" containerID="cri-o://df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a" gracePeriod=2
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.210001 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c4zck" podStartSLOduration=5.596188509 podStartE2EDuration="1m12.209983262s" podCreationTimestamp="2025-12-01 08:41:27 +0000 UTC" firstStartedPulling="2025-12-01 08:41:32.080615309 +0000 UTC m=+172.152903213" lastFinishedPulling="2025-12-01 08:42:38.694410062 +0000 UTC m=+238.766697966" observedRunningTime="2025-12-01 08:42:39.208846869 +0000 UTC m=+239.281134763" watchObservedRunningTime="2025-12-01 08:42:39.209983262 +0000 UTC m=+239.282271166"
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.232044 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l9s25"
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.232091 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l9s25"
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.267292 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6gkfv" podStartSLOduration=7.431621 podStartE2EDuration="1m14.2672656s" podCreationTimestamp="2025-12-01 08:41:25 +0000 UTC" firstStartedPulling="2025-12-01 08:41:32.009508536 +0000 UTC m=+172.081796440" lastFinishedPulling="2025-12-01 08:42:38.845153136 +0000 UTC m=+238.917441040" observedRunningTime="2025-12-01 08:42:39.264228611 +0000 UTC m=+239.336516515" watchObservedRunningTime="2025-12-01 08:42:39.2672656 +0000 UTC m=+239.339553504"
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.269756 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z26c6" podStartSLOduration=4.5421607139999995 podStartE2EDuration="1m11.269747623s" podCreationTimestamp="2025-12-01 08:41:28 +0000 UTC" firstStartedPulling="2025-12-01 08:41:31.956081976 +0000 UTC m=+172.028369870" lastFinishedPulling="2025-12-01 08:42:38.683668875 +0000 UTC m=+238.755956779" observedRunningTime="2025-12-01 08:42:39.239777971 +0000 UTC m=+239.312065875" watchObservedRunningTime="2025-12-01 08:42:39.269747623 +0000 UTC m=+239.342035527"
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.272293 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z26c6"
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.272329 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z26c6"
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.555616 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwvv4"
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.600513 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-catalog-content\") pod \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") "
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.600580 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc7zj\" (UniqueName: \"kubernetes.io/projected/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-kube-api-access-vc7zj\") pod \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") "
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.600617 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-utilities\") pod \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\" (UID: \"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73\") "
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.601721 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-utilities" (OuterVolumeSpecName: "utilities") pod "cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" (UID: "cdd81b3a-e9ab-4f49-b621-3e16eed7ac73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.609557 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-kube-api-access-vc7zj" (OuterVolumeSpecName: "kube-api-access-vc7zj") pod "cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" (UID: "cdd81b3a-e9ab-4f49-b621-3e16eed7ac73"). InnerVolumeSpecName "kube-api-access-vc7zj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.647210 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" (UID: "cdd81b3a-e9ab-4f49-b621-3e16eed7ac73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.701623 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.701665 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:39 crc kubenswrapper[4689]: I1201 08:42:39.701682 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc7zj\" (UniqueName: \"kubernetes.io/projected/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73-kube-api-access-vc7zj\") on node \"crc\" DevicePath \"\""
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.203161 4689 generic.go:334] "Generic (PLEG): container finished" podID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerID="df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a" exitCode=0
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.203258 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwvv4" event={"ID":"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73","Type":"ContainerDied","Data":"df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a"}
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.203781 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hwvv4" event={"ID":"cdd81b3a-e9ab-4f49-b621-3e16eed7ac73","Type":"ContainerDied","Data":"9c19c3d71b518cb5197147fc9b88da3e1cbaf9cffa01032fb2d85ab005740859"}
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.203819 4689 scope.go:117] "RemoveContainer" containerID="df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.203279 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hwvv4"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.219001 4689 scope.go:117] "RemoveContainer" containerID="5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.240057 4689 scope.go:117] "RemoveContainer" containerID="92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.244754 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hwvv4"]
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.249338 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hwvv4"]
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.269186 4689 scope.go:117] "RemoveContainer" containerID="df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.269671 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a\": container with ID starting with df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a not found: ID does not exist" containerID="df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.269785 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a"} err="failed to get container status \"df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a\": rpc error: code = NotFound desc = could not find container \"df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a\": container with ID starting with df7ad68b4d5fb42d63fdd6deff023213332d99f8d0376df3dd0330445e33e97a not found: ID does not exist"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.269977 4689 scope.go:117] "RemoveContainer" containerID="5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.270256 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915\": container with ID starting with 5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915 not found: ID does not exist" containerID="5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.270387 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915"} err="failed to get container status \"5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915\": rpc error: code = NotFound desc = could not find container \"5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915\": container with ID starting with 5d1c3b5de4c6ee4cdee953fa5e033b749a62712184c3a200cfdc2e7b154ac915 not found: ID does not exist"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.270479 4689 scope.go:117] "RemoveContainer" containerID="92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.270824 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a\": container with ID starting with 92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a not found: ID does not exist" containerID="92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.270983 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a"} err="failed to get container status \"92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a\": rpc error: code = NotFound desc = could not find container \"92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a\": container with ID starting with 92dda95cbf8b09286d7feba0af07a65e18517e943eaafee9caac321387ec4f3a not found: ID does not exist"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.281730 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l9s25" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerName="registry-server" probeResult="failure" output=<
Dec 01 08:42:40 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s
Dec 01 08:42:40 crc kubenswrapper[4689]: >
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.320275 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7544d6d989-kzcmr"]
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.320807 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53baf2d-eb84-4da5-938a-675c325fc6dc" containerName="pruner"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.320964 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53baf2d-eb84-4da5-938a-675c325fc6dc" containerName="pruner"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.321092 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerName="registry-server"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.321165 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerName="registry-server"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.321251 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85d92ae-30aa-4302-b217-43a48dcadd8a" containerName="oauth-openshift"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.321313 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85d92ae-30aa-4302-b217-43a48dcadd8a" containerName="oauth-openshift"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.321393 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerName="extract-content"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.321465 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerName="extract-content"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.321532 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerName="extract-utilities"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.321606 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerName="extract-utilities"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.321666 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerName="registry-server"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.321721 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerName="registry-server"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.321777 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerName="extract-utilities"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.321832 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerName="extract-utilities"
Dec 01 08:42:40 crc kubenswrapper[4689]: E1201 08:42:40.321888 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerName="extract-content"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.321947 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerName="extract-content"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.322153 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53baf2d-eb84-4da5-938a-675c325fc6dc" containerName="pruner"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.322261 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85d92ae-30aa-4302-b217-43a48dcadd8a" containerName="oauth-openshift"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.322330 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3d70f2-3da1-4712-bb46-200a641c7648" containerName="registry-server"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.322420 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" containerName="registry-server"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.323126 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.326980 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.327440 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.330901 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.331298 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.331610 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.332316 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.332592 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.332689 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.332809 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.332967 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.333063 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.333145 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.336439 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7544d6d989-kzcmr"]
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.338742 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.340270 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.353166 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z26c6" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerName="registry-server" probeResult="failure" output=<
Dec 01 08:42:40 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s
Dec 01 08:42:40 crc kubenswrapper[4689]: >
Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.358700 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 01
08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.490726 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-template-login\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.490777 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd4ng\" (UniqueName: \"kubernetes.io/projected/2043c180-d558-48e0-8295-e2d244822828-kube-api-access-bd4ng\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.490817 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.490839 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-router-certs\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.490862 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.490905 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-session\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.490933 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.490955 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.490978 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2043c180-d558-48e0-8295-e2d244822828-audit-dir\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.491050 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.491078 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-service-ca\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.491105 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.491129 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-audit-policies\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.491176 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-template-error\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.592098 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-router-certs\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.592474 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.592627 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-session\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.592768 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.592882 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.592991 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2043c180-d558-48e0-8295-e2d244822828-audit-dir\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.593126 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.593227 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-service-ca\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.593382 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.593496 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-audit-policies\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.593608 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-template-error\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.593729 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-template-login\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.593827 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd4ng\" (UniqueName: \"kubernetes.io/projected/2043c180-d558-48e0-8295-e2d244822828-kube-api-access-bd4ng\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.594003 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.594082 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.594899 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-audit-policies\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.595083 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.596652 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.596757 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2043c180-d558-48e0-8295-e2d244822828-audit-dir\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.597245 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-service-ca\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.598911 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-session\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.602686 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.603120 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-template-login\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.604845 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.605178 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-user-template-error\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.605529 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.610467 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd4ng\" (UniqueName: \"kubernetes.io/projected/2043c180-d558-48e0-8295-e2d244822828-kube-api-access-bd4ng\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.617987 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2043c180-d558-48e0-8295-e2d244822828-v4-0-config-system-router-certs\") pod \"oauth-openshift-7544d6d989-kzcmr\" (UID: \"2043c180-d558-48e0-8295-e2d244822828\") " pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.643813 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:40 crc kubenswrapper[4689]: I1201 08:42:40.875183 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7544d6d989-kzcmr"] Dec 01 08:42:41 crc kubenswrapper[4689]: I1201 08:42:41.056082 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd81b3a-e9ab-4f49-b621-3e16eed7ac73" path="/var/lib/kubelet/pods/cdd81b3a-e9ab-4f49-b621-3e16eed7ac73/volumes" Dec 01 08:42:41 crc kubenswrapper[4689]: I1201 08:42:41.209782 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" event={"ID":"2043c180-d558-48e0-8295-e2d244822828","Type":"ContainerStarted","Data":"870ade5037e0bc279850c9c2c97a35dd976b062beddb5076279329b84020c2a9"} Dec 01 08:42:42 crc kubenswrapper[4689]: I1201 08:42:42.218739 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" event={"ID":"2043c180-d558-48e0-8295-e2d244822828","Type":"ContainerStarted","Data":"e53af09cfa10b17acdc7bef887806df4ccc9c794815eb7e9286eccb0cc0b43c5"} Dec 01 08:42:42 crc kubenswrapper[4689]: I1201 08:42:42.220586 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:42 crc kubenswrapper[4689]: I1201 08:42:42.227176 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" Dec 01 08:42:42 crc kubenswrapper[4689]: I1201 08:42:42.257554 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" podStartSLOduration=32.257511648 podStartE2EDuration="32.257511648s" podCreationTimestamp="2025-12-01 08:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:42:42.244187495 +0000 UTC m=+242.316475459" watchObservedRunningTime="2025-12-01 08:42:42.257511648 +0000 UTC m=+242.329799572" Dec 01 08:42:46 crc kubenswrapper[4689]: I1201 08:42:46.591784 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:42:46 crc kubenswrapper[4689]: I1201 08:42:46.591887 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:42:46 crc kubenswrapper[4689]: I1201 08:42:46.678295 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:42:47 crc kubenswrapper[4689]: I1201 08:42:47.376214 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:42:47 crc kubenswrapper[4689]: I1201 08:42:47.760670 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:42:47 crc kubenswrapper[4689]: I1201 08:42:47.760737 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:42:47 crc kubenswrapper[4689]: I1201 08:42:47.838853 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:42:48 crc kubenswrapper[4689]: I1201 08:42:48.424731 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c4zck" Dec 01 08:42:49 crc kubenswrapper[4689]: I1201 08:42:49.302864 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:42:49 crc kubenswrapper[4689]: I1201 08:42:49.345276 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:42:49 crc kubenswrapper[4689]: I1201 08:42:49.363219 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l9s25" Dec 01 08:42:49 crc kubenswrapper[4689]: I1201 08:42:49.412360 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z26c6" Dec 01 08:42:49 crc kubenswrapper[4689]: I1201 08:42:49.651258 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6gkfv"] Dec 01 08:42:49 crc kubenswrapper[4689]: I1201 08:42:49.651566 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6gkfv" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerName="registry-server" containerID="cri-o://8b4d746f0a2ce206cb7819078c2f094aceaeef7f49cdaa3486b844c92c2c27e6" gracePeriod=2 Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.360690 4689 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.361291 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2" gracePeriod=15 Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.361323 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b" gracePeriod=15 Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.361497 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65" gracePeriod=15 Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.361412 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f" gracePeriod=15 Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.361605 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1" gracePeriod=15 Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.364282 4689 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 08:42:52 crc kubenswrapper[4689]: E1201 08:42:52.364707 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.364740 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 08:42:52 crc kubenswrapper[4689]: E1201 08:42:52.364766 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.364780 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 08:42:52 crc kubenswrapper[4689]: E1201 08:42:52.364794 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.364806 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 08:42:52 crc kubenswrapper[4689]: E1201 08:42:52.364824 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.364837 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 08:42:52 crc kubenswrapper[4689]: E1201 08:42:52.364860 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.364873 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 08:42:52 crc kubenswrapper[4689]: E1201 08:42:52.364892 4689 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.364904 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 08:42:52 crc kubenswrapper[4689]: E1201 08:42:52.364944 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.364957 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.365143 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.365162 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.365186 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.365203 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.365216 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.365236 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.433284 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]log ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]api-openshift-apiserver-available ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]api-openshift-oauth-apiserver-available ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]informer-sync ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/generic-apiserver-start-informers ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/priority-and-fairness-filter ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 01 08:42:52 crc 
kubenswrapper[4689]: [+]poststarthook/start-apiextensions-informers ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/start-apiextensions-controllers ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/crd-informer-synced ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/start-system-namespaces-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/rbac/bootstrap-roles ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/bootstrap-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/start-kube-aggregator-informers ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/apiservice-registration-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/apiservice-discovery-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]autoregister-completion ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/apiservice-openapi-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 01 08:42:52 crc kubenswrapper[4689]: [-]shutdown failed: reason withheld Dec 01 08:42:52 crc kubenswrapper[4689]: readyz check failed Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.433406 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.525978 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.526089 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.526119 4689 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.627575 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.627660 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.627742 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.627855 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.627886 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.627929 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.721571 4689 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.723300 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.729637 4689 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.739108 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.739332 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.739493 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.739543 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.739606 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.840790 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.840937 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.840976 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.841002 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.841045 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.841121 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.841219 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.841215 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.841332 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.841341 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: E1201 08:42:52.876135 4689 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: I1201 08:42:52.876713 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:52 crc kubenswrapper[4689]: E1201 08:42:52.927028 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d0adce981bc60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 08:42:52.92577904 +0000 UTC m=+252.998066934,LastTimestamp:2025-12-01 08:42:52.92577904 +0000 UTC m=+252.998066934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.394147 4689 generic.go:334] "Generic (PLEG): container finished" podID="51372c30-ea27-438b-ba20-741b5e630044" containerID="1263410c61f6b949d666e25834717063f8fc60babfd1780d84e579eda720873e" exitCode=0 Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.394388 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"51372c30-ea27-438b-ba20-741b5e630044","Type":"ContainerDied","Data":"1263410c61f6b949d666e25834717063f8fc60babfd1780d84e579eda720873e"} Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.396247 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.399877 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092"} Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.399930 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"03a76d71aeaca46d805577f6e0efb289064814a53ce9083cbba8267c6d5526f8"} Dec 01 08:42:53 crc kubenswrapper[4689]: E1201 08:42:53.400969 4689 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.401787 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.404764 4689 generic.go:334] "Generic (PLEG): container finished" podID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerID="8b4d746f0a2ce206cb7819078c2f094aceaeef7f49cdaa3486b844c92c2c27e6" exitCode=0 Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.404870 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gkfv" event={"ID":"2790f0e0-bca7-4070-8d79-72ae564043ef","Type":"ContainerDied","Data":"8b4d746f0a2ce206cb7819078c2f094aceaeef7f49cdaa3486b844c92c2c27e6"} Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.408598 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.410904 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.412242 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b" exitCode=0 Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.412323 4689 scope.go:117] "RemoveContainer" containerID="83ec64e82504ae546c78bdb3eb8b6cf47514373616749723ec457191b95b73c7" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.412478 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f" exitCode=0 Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.412716 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2" exitCode=0 Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.412815 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65" exitCode=2 Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.786577 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.787790 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.788060 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.955463 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-catalog-content\") pod \"2790f0e0-bca7-4070-8d79-72ae564043ef\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.955644 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2n49\" (UniqueName: \"kubernetes.io/projected/2790f0e0-bca7-4070-8d79-72ae564043ef-kube-api-access-x2n49\") pod \"2790f0e0-bca7-4070-8d79-72ae564043ef\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.955871 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-utilities\") pod \"2790f0e0-bca7-4070-8d79-72ae564043ef\" (UID: \"2790f0e0-bca7-4070-8d79-72ae564043ef\") " Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.957460 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-utilities" (OuterVolumeSpecName: "utilities") pod "2790f0e0-bca7-4070-8d79-72ae564043ef" (UID: "2790f0e0-bca7-4070-8d79-72ae564043ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:42:53 crc kubenswrapper[4689]: I1201 08:42:53.962620 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2790f0e0-bca7-4070-8d79-72ae564043ef-kube-api-access-x2n49" (OuterVolumeSpecName: "kube-api-access-x2n49") pod "2790f0e0-bca7-4070-8d79-72ae564043ef" (UID: "2790f0e0-bca7-4070-8d79-72ae564043ef"). InnerVolumeSpecName "kube-api-access-x2n49". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.027598 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2790f0e0-bca7-4070-8d79-72ae564043ef" (UID: "2790f0e0-bca7-4070-8d79-72ae564043ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.057903 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.057966 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2790f0e0-bca7-4070-8d79-72ae564043ef-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.058206 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2n49\" (UniqueName: \"kubernetes.io/projected/2790f0e0-bca7-4070-8d79-72ae564043ef-kube-api-access-x2n49\") on node \"crc\" DevicePath \"\"" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.432013 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6gkfv" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.432470 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6gkfv" event={"ID":"2790f0e0-bca7-4070-8d79-72ae564043ef","Type":"ContainerDied","Data":"9bcc3463687c41d144b3d915cbd4ff095a197e9ce3469ae38975bfa356a52988"} Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.432568 4689 scope.go:117] "RemoveContainer" containerID="8b4d746f0a2ce206cb7819078c2f094aceaeef7f49cdaa3486b844c92c2c27e6" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.433578 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.433974 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.438952 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.454814 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.455045 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.455646 4689 scope.go:117] "RemoveContainer" containerID="95dddc55c7ef43ef995476c680ac865f3db512c409a4355f918a5e2a83607204" Dec 01 
08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.497691 4689 scope.go:117] "RemoveContainer" containerID="a1285b3b17210603941d93eb1f3b9d14f6dcae4fc98cf7b47167bdd2d433fb51" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.741889 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.742546 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.743048 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.878621 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-var-lock\") pod \"51372c30-ea27-438b-ba20-741b5e630044\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.879307 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51372c30-ea27-438b-ba20-741b5e630044-kube-api-access\") pod \"51372c30-ea27-438b-ba20-741b5e630044\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.879354 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-kubelet-dir\") pod \"51372c30-ea27-438b-ba20-741b5e630044\" (UID: \"51372c30-ea27-438b-ba20-741b5e630044\") " Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.878757 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-var-lock" (OuterVolumeSpecName: "var-lock") pod "51372c30-ea27-438b-ba20-741b5e630044" (UID: "51372c30-ea27-438b-ba20-741b5e630044"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.879667 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "51372c30-ea27-438b-ba20-741b5e630044" (UID: "51372c30-ea27-438b-ba20-741b5e630044"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.879973 4689 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.879994 4689 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51372c30-ea27-438b-ba20-741b5e630044-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.885027 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51372c30-ea27-438b-ba20-741b5e630044-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "51372c30-ea27-438b-ba20-741b5e630044" (UID: "51372c30-ea27-438b-ba20-741b5e630044"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:42:54 crc kubenswrapper[4689]: I1201 08:42:54.982053 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51372c30-ea27-438b-ba20-741b5e630044-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.150234 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.150894 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.151555 4689 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.151959 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.152281 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.183829 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.183899 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.183907 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.183968 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.184027 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.184123 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.184248 4689 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.184272 4689 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.184284 4689 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.458782 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.460858 4689 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1" exitCode=0 Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.460986 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.461108 4689 scope.go:117] "RemoveContainer" containerID="5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.465725 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"51372c30-ea27-438b-ba20-741b5e630044","Type":"ContainerDied","Data":"c4dca92dbf45b58b6c2f2fefde135a09fa22fd5492cef1b6cd1667b1a86bfea6"} Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.465853 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.465857 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4dca92dbf45b58b6c2f2fefde135a09fa22fd5492cef1b6cd1667b1a86bfea6" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.474068 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.474853 4689 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.475496 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.488757 4689 scope.go:117] "RemoveContainer" containerID="7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.494001 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.495055 4689 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.496628 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 
08:42:55.508761 4689 scope.go:117] "RemoveContainer" containerID="848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.534634 4689 scope.go:117] "RemoveContainer" containerID="ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.550719 4689 scope.go:117] "RemoveContainer" containerID="2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.566855 4689 scope.go:117] "RemoveContainer" containerID="6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.592266 4689 scope.go:117] "RemoveContainer" containerID="5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.593857 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\": container with ID starting with 5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b not found: ID does not exist" containerID="5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.594053 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b"} err="failed to get container status \"5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\": rpc error: code = NotFound desc = could not find container \"5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b\": container with ID starting with 5920f4efa4e812090eb100b51bcf8e37eb82eb6b9fa495c8560ceeea3a94d55b not found: ID does not exist" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.594247 4689 scope.go:117] "RemoveContainer" containerID="7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.594924 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\": container with ID starting with 7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f not found: ID does not exist" containerID="7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.595002 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f"} err="failed to get container status \"7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\": rpc error: code = NotFound desc = could not find container \"7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f\": container with ID starting with 7190f95ad8a5cce27417903e343138ff345b97c0713609f47db91d8fe0c44a7f not found: ID does not exist" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.595058 4689 scope.go:117] "RemoveContainer" containerID="848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.595657 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\": container with ID starting with 848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2 not found: ID does not exist" containerID="848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.595720 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2"} err="failed to get container status \"848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\": rpc error: code = NotFound desc = could not find container \"848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2\": container with ID starting with 848171dd65d193193056bff78092de5ea65abaf7af4d168a9c27409765bd0ac2 not found: ID does not exist" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.595753 4689 scope.go:117] "RemoveContainer" containerID="ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.596081 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\": container with ID starting with ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65 not found: ID does not exist" containerID="ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.596114 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65"} err="failed to get container status \"ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\": rpc error: code = NotFound desc = could not find container \"ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65\": container with ID starting with ad9f8bfeec0bea5ec9df44017223d633ce320bba477e99d5cc325712f92fff65 not found: ID does not exist" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.596133 4689 scope.go:117] "RemoveContainer" containerID="2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.596392 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\": container with ID starting with 2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1 not found: ID does not exist" containerID="2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.596414 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1"} err="failed to get container status \"2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\": rpc error: code = NotFound desc = could not find container \"2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1\": container with ID starting with 2dbad86a3834b9ec3334de7ccc49988da1f7e42c423801c9789ce12ba70390b1 not found: ID does not exist" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.596433 4689 scope.go:117] "RemoveContainer" containerID="6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8" Dec 01 08:42:55 crc 
kubenswrapper[4689]: E1201 08:42:55.596647 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\": container with ID starting with 6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8 not found: ID does not exist" containerID="6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.596666 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8"} err="failed to get container status \"6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\": rpc error: code = NotFound desc = could not find container \"6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8\": container with ID starting with 6bba1efe2a197a5c47fc4220c1c53da0db51bcbdf317261c978b7f31348034f8 not found: ID does not exist" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.624976 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.625589 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.625920 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.626263 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.626541 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:42:55 crc kubenswrapper[4689]: I1201 08:42:55.626583 4689 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.626800 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="200ms" Dec 01 08:42:55 crc kubenswrapper[4689]: E1201 08:42:55.828039 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="400ms" Dec 01 08:42:56 crc kubenswrapper[4689]: E1201 08:42:56.230090 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="800ms" Dec 01 08:42:57 crc kubenswrapper[4689]: E1201 08:42:57.031114 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="1.6s" Dec 01 08:42:57 crc kubenswrapper[4689]: I1201 08:42:57.060756 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 01 08:42:57 crc kubenswrapper[4689]: E1201 08:42:57.630713 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d0adce981bc60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 08:42:52.92577904 +0000 UTC m=+252.998066934,LastTimestamp:2025-12-01 08:42:52.92577904 +0000 UTC m=+252.998066934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 08:42:58 crc kubenswrapper[4689]: E1201 08:42:58.631988 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="3.2s" Dec 01 08:43:01 crc kubenswrapper[4689]: I1201 08:43:01.049787 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:01 crc kubenswrapper[4689]: I1201 08:43:01.050522 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:01 crc kubenswrapper[4689]: E1201 08:43:01.833552 4689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="6.4s" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.046741 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.049472 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.050217 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.082341 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.082604 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:07 crc kubenswrapper[4689]: E1201 08:43:07.083275 4689 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.084159 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.368929 4689 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.368991 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.564898 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.565429 4689 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854" exitCode=1 Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.565576 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854"} Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.566203 4689 scope.go:117] "RemoveContainer" 
containerID="16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.566939 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.567554 4689 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.568184 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.569632 4689 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="cf6d0e50890e2bd0d3bb527fffc3109dda8af27d81f09e18c7228c5cccc1dffe" exitCode=0 Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.569698 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"cf6d0e50890e2bd0d3bb527fffc3109dda8af27d81f09e18c7228c5cccc1dffe"} Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.569767 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bb35609d74b1e0302398d19dc426b95eab8c877c1b6db37afb9e8e6d135a53b9"} Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.570286 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.570322 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.570725 4689 status_manager.go:851] "Failed to get status for pod" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" pod="openshift-marketplace/community-operators-6gkfv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6gkfv\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:07 crc kubenswrapper[4689]: E1201 08:43:07.570862 4689 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.571223 4689 status_manager.go:851] "Failed to get status for pod" podUID="51372c30-ea27-438b-ba20-741b5e630044" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:07 crc kubenswrapper[4689]: I1201 08:43:07.571735 4689 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 01 08:43:07 crc kubenswrapper[4689]: E1201 08:43:07.633944 4689 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187d0adce981bc60 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 08:42:52.92577904 +0000 UTC m=+252.998066934,LastTimestamp:2025-12-01 08:42:52.92577904 +0000 UTC m=+252.998066934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 08:43:08 crc kubenswrapper[4689]: I1201 08:43:08.586441 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 08:43:08 crc kubenswrapper[4689]: I1201 08:43:08.586845 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bc73e3a6c8466074f7a3426599663264f2b944371848010f6ae8a2865175740f"} Dec 01 08:43:08 crc kubenswrapper[4689]: I1201 08:43:08.590709 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d3257527dfae880a78e68bb648ce124385459526492af2d2ec2637545095500e"} Dec 01 08:43:08 crc kubenswrapper[4689]: I1201 08:43:08.590735 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a49f5edfcb90e23f0d2a242614ff87372978eb2b1cd973a3c6bc95454a2fde8c"} Dec 01 08:43:08 crc kubenswrapper[4689]: I1201 08:43:08.590744 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"52580d6899126a6c9fdf1c2e4371fc9e9a4412007ea147c573aa38cd5da517f3"} Dec 01 08:43:09 crc kubenswrapper[4689]: I1201 08:43:09.601861 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"78ce67e8486b2f36c5509ce7a3e82b7c69c9b5a8be341df137baeb8361588564"} Dec 01 08:43:09 crc kubenswrapper[4689]: I1201 08:43:09.601917 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"56ac511c83626a22dfedf8709ee91d0f462b99df8fc26c069cc38cda2b8c78b8"} Dec 01 08:43:09 crc kubenswrapper[4689]: I1201 08:43:09.602105 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:09 crc kubenswrapper[4689]: I1201 08:43:09.602213 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:09 crc kubenswrapper[4689]: I1201 08:43:09.602244 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:10 crc kubenswrapper[4689]: I1201 08:43:10.532496 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:43:12 crc kubenswrapper[4689]: I1201 08:43:12.084727 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:12 crc kubenswrapper[4689]: I1201 08:43:12.086168 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:12 crc kubenswrapper[4689]: I1201 08:43:12.090410 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:13 crc kubenswrapper[4689]: I1201 08:43:13.872750 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:43:13 crc kubenswrapper[4689]: I1201 08:43:13.880640 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:43:14 crc kubenswrapper[4689]: I1201 08:43:14.801735 4689 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:14 crc kubenswrapper[4689]: I1201 08:43:14.976490 4689 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="68dcbb13-0c35-4708-ba99-91470f8e1382" Dec 01 08:43:15 crc kubenswrapper[4689]: I1201 08:43:15.636637 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:15 crc kubenswrapper[4689]: I1201 08:43:15.636667 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:15 crc kubenswrapper[4689]: I1201 08:43:15.640513 4689 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="68dcbb13-0c35-4708-ba99-91470f8e1382" Dec 01 08:43:15 crc kubenswrapper[4689]: I1201 08:43:15.643130 4689 status_manager.go:308] "Container readiness changed before pod has synced" 
pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://52580d6899126a6c9fdf1c2e4371fc9e9a4412007ea147c573aa38cd5da517f3" Dec 01 08:43:15 crc kubenswrapper[4689]: I1201 08:43:15.643176 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:16 crc kubenswrapper[4689]: I1201 08:43:16.640776 4689 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:16 crc kubenswrapper[4689]: I1201 08:43:16.640808 4689 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1946844b-83ee-401e-b3b6-5994ef81c85e" Dec 01 08:43:16 crc kubenswrapper[4689]: I1201 08:43:16.644881 4689 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="68dcbb13-0c35-4708-ba99-91470f8e1382" Dec 01 08:43:20 crc kubenswrapper[4689]: I1201 08:43:20.540624 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 08:43:24 crc kubenswrapper[4689]: I1201 08:43:24.673050 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 01 08:43:25 crc kubenswrapper[4689]: I1201 08:43:25.143451 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 08:43:25 crc kubenswrapper[4689]: I1201 08:43:25.327834 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 01 08:43:25 crc kubenswrapper[4689]: I1201 08:43:25.334577 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 08:43:25 crc kubenswrapper[4689]: I1201 08:43:25.710774 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 01 08:43:26 crc kubenswrapper[4689]: I1201 08:43:26.018757 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 08:43:26 crc kubenswrapper[4689]: I1201 08:43:26.241474 4689 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 01 08:43:26 crc kubenswrapper[4689]: I1201 08:43:26.289896 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 01 08:43:26 crc kubenswrapper[4689]: I1201 08:43:26.712348 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 01 08:43:26 crc kubenswrapper[4689]: I1201 08:43:26.734653 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 08:43:26 crc kubenswrapper[4689]: I1201 08:43:26.795302 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 08:43:26 crc kubenswrapper[4689]: I1201 08:43:26.862262 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 08:43:26 crc kubenswrapper[4689]: I1201 08:43:26.891313 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.088694 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.097700 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.440192 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.444767 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.731651 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.740778 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.773649 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.779394 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.877845 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 08:43:27 crc kubenswrapper[4689]: I1201 08:43:27.921654 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.026559 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.050949 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.148593 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.164191 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.340400 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.404339 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.433898 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.530952 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.605807 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.655815 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.705783 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.771223 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.784116 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.834038 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.834865 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.877136 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.994072 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 08:43:28 crc kubenswrapper[4689]: I1201 08:43:28.994776 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.009131 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.069989 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.078761 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.093910 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.212224 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.240934 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.247610 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.293704 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 01 08:43:29 crc 
kubenswrapper[4689]: I1201 08:43:29.352113 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.354795 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.529681 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.562317 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.694248 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.705556 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.759222 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.893199 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.895081 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.902745 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 08:43:29 crc kubenswrapper[4689]: I1201 08:43:29.999100 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.042825 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.054708 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.096319 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.108842 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.135298 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.276160 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.298566 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.358039 4689 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 
08:43:30.401696 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.425479 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.428241 4689 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.476463 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.536235 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.547779 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.557060 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.664536 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.737833 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.808141 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.873933 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 08:43:30 crc kubenswrapper[4689]: I1201 08:43:30.876533 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.003260 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.044823 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.056043 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.062984 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.081651 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.109435 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.133997 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.204511 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.211013 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.310180 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.355189 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.379939 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.424439 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.424651 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.552859 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.585324 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.601625 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.712858 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.732549 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.790565 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.860240 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.935774 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 08:43:31 crc kubenswrapper[4689]: I1201 08:43:31.954660 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.069015 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.078332 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.109989 4689 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.163038 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.181721 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.218214 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.250788 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.296715 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.362871 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.388260 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.448793 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.453100 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.486518 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.578157 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.618733 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.680614 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.748021 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.768270 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.769210 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 01 08:43:32 crc kubenswrapper[4689]: I1201 08:43:32.978615 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.036148 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 
08:43:33.038039 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.043346 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.043599 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.051544 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.072724 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.107933 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.123204 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.236961 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.236995 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.260749 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.287561 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.291854 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.325993 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.383223 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.409739 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.468383 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.486652 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.530727 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.579159 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.692417 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.893939 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.894499 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.970942 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.977130 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 01 08:43:33 crc kubenswrapper[4689]: I1201 08:43:33.992938 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.184257 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.194115 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.198151 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.221730 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.253789 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.320195 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.439601 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.465920 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.493863 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.528669 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.530825 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.621795 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.621953 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.648092 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 01 
08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.672183 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.684441 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.697074 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.718969 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.754308 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.893186 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 08:43:34 crc kubenswrapper[4689]: I1201 08:43:34.980993 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.063757 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.103346 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.322405 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.324634 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.369003 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.381534 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.385732 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.399050 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.427467 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.497071 4689 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.533862 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.579944 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.803090 4689 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.929257 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.944569 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 01 08:43:35 crc kubenswrapper[4689]: I1201 08:43:35.986717 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.007056 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.063454 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.105567 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.166118 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.183577 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.350922 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.524091 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.579669 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.670939 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.692501 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.694849 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.756072 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.786840 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.872603 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.886897 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 
08:43:36.893947 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.908503 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.959423 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 01 08:43:36 crc kubenswrapper[4689]: I1201 08:43:36.979924 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.030964 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.097835 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.107559 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.117870 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.124562 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.213633 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.358730 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.366783 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.373057 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.466182 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.505153 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.575314 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.585286 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.605759 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.784875 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.878470 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.957532 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 08:43:37 crc kubenswrapper[4689]: I1201 08:43:37.977391 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.009542 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.125680 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.182734 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.204113 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.276875 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.347793 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.411897 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.425962 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.547356 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.722459 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.742883 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.867083 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.867792 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.915777 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 01 08:43:38 crc kubenswrapper[4689]: I1201 08:43:38.925993 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.035620 4689 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.164108 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.297048 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.363732 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.450393 4689 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.459130 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-6gkfv"] Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.459234 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.472697 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.510888 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.510826511 podStartE2EDuration="25.510826511s" podCreationTimestamp="2025-12-01 08:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:43:39.489468025 +0000 UTC m=+299.561755929" watchObservedRunningTime="2025-12-01 08:43:39.510826511 +0000 UTC m=+299.583114435" Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.689443 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 08:43:39 crc kubenswrapper[4689]: I1201 08:43:39.702638 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 08:43:40 crc kubenswrapper[4689]: I1201 08:43:40.367471 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 01 08:43:40 crc kubenswrapper[4689]: I1201 08:43:40.484006 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 08:43:40 crc kubenswrapper[4689]: I1201 08:43:40.579747 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 08:43:40 crc kubenswrapper[4689]: I1201 08:43:40.868483 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 01 08:43:41 crc kubenswrapper[4689]: I1201 08:43:41.053793 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" path="/var/lib/kubelet/pods/2790f0e0-bca7-4070-8d79-72ae564043ef/volumes" Dec 01 08:43:41 crc kubenswrapper[4689]: I1201 08:43:41.432824 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 08:43:41 crc kubenswrapper[4689]: I1201 
08:43:41.512300 4689 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 01 08:43:48 crc kubenswrapper[4689]: I1201 08:43:48.844151 4689 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 08:43:48 crc kubenswrapper[4689]: I1201 08:43:48.844882 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092" gracePeriod=5 Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.440871 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.441588 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457015 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457084 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457106 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457142 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457166 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457283 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457284 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457336 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457358 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457440 4689 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457474 4689 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.457483 4689 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.465448 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.559290 4689 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.559328 4689 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.878427 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.878508 4689 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092" exitCode=137 Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.878630 4689 scope.go:117] "RemoveContainer" containerID="a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.878795 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.904392 4689 scope.go:117] "RemoveContainer" containerID="a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092" Dec 01 08:43:54 crc kubenswrapper[4689]: E1201 08:43:54.905284 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092\": container with ID starting with a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092 not found: ID does not exist" containerID="a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092" Dec 01 08:43:54 crc kubenswrapper[4689]: I1201 08:43:54.905692 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092"} err="failed to get container status \"a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092\": rpc error: code = NotFound desc = could not find container \"a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092\": container with ID starting with a93bdebb7a3221b22d9d7587fa265597ca1806acbc482838d0acf9cc98da8092 not found: ID does not exist" Dec 01 08:43:55 crc kubenswrapper[4689]: I1201 08:43:55.054000 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.356006 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kvdm"] Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.356692 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4kvdm" podUID="e5e4c105-766f-4c1a-befe-a059da17406f" containerName="registry-server" containerID="cri-o://21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950" gracePeriod=30 Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.366470 4689 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-twqmb"] Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.366784 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-twqmb" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerName="registry-server" containerID="cri-o://44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c" gracePeriod=30 Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.373253 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l54ll"] Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.373595 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" podUID="2fd47e85-de9d-475a-8907-4e805cb1cfc8" containerName="marketplace-operator" containerID="cri-o://ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f" gracePeriod=30 Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.391994 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4zck"] Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.392329 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c4zck" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerName="registry-server" containerID="cri-o://994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825" gracePeriod=30 Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.400697 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9s25"] Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.401029 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l9s25" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerName="registry-server" containerID="cri-o://b90fe638e664ed622cd75be22ec5fc0d9dea5edc51ddcdc9b4cf6c17045838a0" gracePeriod=30 Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.410305 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z26c6"] Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.410679 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z26c6" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerName="registry-server" containerID="cri-o://6d7d3beddc20a2bf7df1809d2beb1552a8eb8de8e78a5ee6eaaa11077bbae855" gracePeriod=30 Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.413733 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jhh4c"] Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.414020 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51372c30-ea27-438b-ba20-741b5e630044" containerName="installer" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.414074 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="51372c30-ea27-438b-ba20-741b5e630044" containerName="installer" Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.414093 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerName="extract-utilities" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.414101 4689 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerName="extract-utilities" Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.414112 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerName="registry-server" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.414118 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerName="registry-server" Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.414131 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.414137 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.414147 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerName="extract-content" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.414152 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerName="extract-content" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.414294 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="51372c30-ea27-438b-ba20-741b5e630044" containerName="installer" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.414308 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2790f0e0-bca7-4070-8d79-72ae564043ef" containerName="registry-server" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.414317 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.414854 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.424794 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jhh4c"]
Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.579540 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c is running failed: container process not found" containerID="44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.580450 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c is running failed: container process not found" containerID="44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.580897 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c is running failed: container process not found" containerID="44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.580961 4689 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-twqmb" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerName="registry-server"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.583235 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cd9ccf0-2f85-4649-ac80-931f337566ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jhh4c\" (UID: \"0cd9ccf0-2f85-4649-ac80-931f337566ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.583325 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cd9ccf0-2f85-4649-ac80-931f337566ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jhh4c\" (UID: \"0cd9ccf0-2f85-4649-ac80-931f337566ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.583380 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6qrj\" (UniqueName: \"kubernetes.io/projected/0cd9ccf0-2f85-4649-ac80-931f337566ca-kube-api-access-s6qrj\") pod \"marketplace-operator-79b997595-jhh4c\" (UID: \"0cd9ccf0-2f85-4649-ac80-931f337566ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.684603 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cd9ccf0-2f85-4649-ac80-931f337566ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jhh4c\" (UID: \"0cd9ccf0-2f85-4649-ac80-931f337566ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.684658 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cd9ccf0-2f85-4649-ac80-931f337566ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jhh4c\" (UID: \"0cd9ccf0-2f85-4649-ac80-931f337566ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.684686 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6qrj\" (UniqueName: \"kubernetes.io/projected/0cd9ccf0-2f85-4649-ac80-931f337566ca-kube-api-access-s6qrj\") pod \"marketplace-operator-79b997595-jhh4c\" (UID: \"0cd9ccf0-2f85-4649-ac80-931f337566ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.685944 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0cd9ccf0-2f85-4649-ac80-931f337566ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jhh4c\" (UID: \"0cd9ccf0-2f85-4649-ac80-931f337566ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.693845 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0cd9ccf0-2f85-4649-ac80-931f337566ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jhh4c\" (UID: \"0cd9ccf0-2f85-4649-ac80-931f337566ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.704352 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6qrj\" (UniqueName: \"kubernetes.io/projected/0cd9ccf0-2f85-4649-ac80-931f337566ca-kube-api-access-s6qrj\") pod \"marketplace-operator-79b997595-jhh4c\" (UID: \"0cd9ccf0-2f85-4649-ac80-931f337566ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.735102 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.849197 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kvdm"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.867884 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c4zck"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.909164 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.931121 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9s25"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.936735 4689 generic.go:334] "Generic (PLEG): container finished" podID="2fd47e85-de9d-475a-8907-4e805cb1cfc8" containerID="ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f" exitCode=0
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.936846 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.937135 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" event={"ID":"2fd47e85-de9d-475a-8907-4e805cb1cfc8","Type":"ContainerDied","Data":"ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f"}
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.937209 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l54ll" event={"ID":"2fd47e85-de9d-475a-8907-4e805cb1cfc8","Type":"ContainerDied","Data":"98586172a1630518d8d2fb97cf9c11296c621272754524499e6ac7c4d592ade5"}
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.937232 4689 scope.go:117] "RemoveContainer" containerID="ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.942276 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twqmb"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.944841 4689 generic.go:334] "Generic (PLEG): container finished" podID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerID="b90fe638e664ed622cd75be22ec5fc0d9dea5edc51ddcdc9b4cf6c17045838a0" exitCode=0
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.944889 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9s25" event={"ID":"a02d72db-aa64-4300-acc0-93b8677bf6df","Type":"ContainerDied","Data":"b90fe638e664ed622cd75be22ec5fc0d9dea5edc51ddcdc9b4cf6c17045838a0"}
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.944955 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9s25"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.952693 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z26c6"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.952946 4689 generic.go:334] "Generic (PLEG): container finished" podID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerID="6d7d3beddc20a2bf7df1809d2beb1552a8eb8de8e78a5ee6eaaa11077bbae855" exitCode=0
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.953062 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26c6" event={"ID":"6729f1b7-260e-4a90-a2da-1258e036b9ea","Type":"ContainerDied","Data":"6d7d3beddc20a2bf7df1809d2beb1552a8eb8de8e78a5ee6eaaa11077bbae855"}
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.962712 4689 scope.go:117] "RemoveContainer" containerID="ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f"
Dec 01 08:43:56 crc kubenswrapper[4689]: E1201 08:43:56.963208 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f\": container with ID starting with ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f not found: ID does not exist" containerID="ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.963275 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f"} err="failed to get container status \"ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f\": rpc error: code = NotFound desc = could not find container \"ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f\": container with ID starting with ee010a2ebb95779aba05be2b8efd21cb507083874d8809a3e81b3d022a5ed38f not found: ID does not exist"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.963321 4689 scope.go:117] "RemoveContainer" containerID="b90fe638e664ed622cd75be22ec5fc0d9dea5edc51ddcdc9b4cf6c17045838a0"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.969711 4689 generic.go:334] "Generic (PLEG): container finished" podID="e5e4c105-766f-4c1a-befe-a059da17406f" containerID="21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950" exitCode=0
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.969887 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4kvdm"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.970869 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kvdm" event={"ID":"e5e4c105-766f-4c1a-befe-a059da17406f","Type":"ContainerDied","Data":"21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950"}
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.970930 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4kvdm" event={"ID":"e5e4c105-766f-4c1a-befe-a059da17406f","Type":"ContainerDied","Data":"46ad426173657b5d5f0ed4f4d81dabf7c8b6aa86903a0e975aaf9b32be2a200a"}
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.987694 4689 generic.go:334] "Generic (PLEG): container finished" podID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerID="44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c" exitCode=0
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.987757 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twqmb"
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.987747 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twqmb" event={"ID":"a49ba834-1d80-4003-bf95-6dfd68b25a49","Type":"ContainerDied","Data":"44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c"}
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.987923 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twqmb" event={"ID":"a49ba834-1d80-4003-bf95-6dfd68b25a49","Type":"ContainerDied","Data":"806cd1a6841fce146b5c5da9d75005f3dac73b478d3551d9ac473708a515e4d6"}
Dec 01 08:43:56 crc kubenswrapper[4689]: I1201 08:43:56.996754 4689 scope.go:117] "RemoveContainer" containerID="3c37e4e3406a81a196f0e8d6453acc11d0c09f19a4d62a25aa0197583e22e209"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.001994 4689 generic.go:334] "Generic (PLEG): container finished" podID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerID="994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825" exitCode=0
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.002390 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c4zck"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.002403 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4zck" event={"ID":"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9","Type":"ContainerDied","Data":"994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825"}
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.002484 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c4zck" event={"ID":"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9","Type":"ContainerDied","Data":"cd9db9cebf3ac95af38dd29dfd136498499bb028fa8e151c4d3a553200e92585"}
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.002410 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-catalog-content\") pod \"e5e4c105-766f-4c1a-befe-a059da17406f\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.003991 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-utilities\") pod \"e5e4c105-766f-4c1a-befe-a059da17406f\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004513 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-catalog-content\") pod \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004545 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xffcg\" (UniqueName: \"kubernetes.io/projected/2fd47e85-de9d-475a-8907-4e805cb1cfc8-kube-api-access-xffcg\") pod \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004588 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrb74\" (UniqueName: \"kubernetes.io/projected/e5e4c105-766f-4c1a-befe-a059da17406f-kube-api-access-nrb74\") pod \"e5e4c105-766f-4c1a-befe-a059da17406f\" (UID: \"e5e4c105-766f-4c1a-befe-a059da17406f\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004619 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-operator-metrics\") pod \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004676 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-catalog-content\") pod \"a02d72db-aa64-4300-acc0-93b8677bf6df\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004702 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxp5n\" (UniqueName: \"kubernetes.io/projected/6729f1b7-260e-4a90-a2da-1258e036b9ea-kube-api-access-mxp5n\") pod \"6729f1b7-260e-4a90-a2da-1258e036b9ea\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004745 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6z2f\" (UniqueName: \"kubernetes.io/projected/a49ba834-1d80-4003-bf95-6dfd68b25a49-kube-api-access-j6z2f\") pod \"a49ba834-1d80-4003-bf95-6dfd68b25a49\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004774 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-trusted-ca\") pod \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\" (UID: \"2fd47e85-de9d-475a-8907-4e805cb1cfc8\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004818 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-utilities\") pod \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004832 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-utilities" (OuterVolumeSpecName: "utilities") pod "e5e4c105-766f-4c1a-befe-a059da17406f" (UID: "e5e4c105-766f-4c1a-befe-a059da17406f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004846 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-catalog-content\") pod \"6729f1b7-260e-4a90-a2da-1258e036b9ea\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004892 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-utilities\") pod \"a49ba834-1d80-4003-bf95-6dfd68b25a49\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004923 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-utilities\") pod \"a02d72db-aa64-4300-acc0-93b8677bf6df\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004944 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnvrm\" (UniqueName: \"kubernetes.io/projected/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-kube-api-access-fnvrm\") pod \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\" (UID: \"1860f8a4-ce73-4d74-8dcf-0a43a90d35b9\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.004999 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-utilities\") pod \"6729f1b7-260e-4a90-a2da-1258e036b9ea\" (UID: \"6729f1b7-260e-4a90-a2da-1258e036b9ea\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.005538 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-catalog-content\") pod \"a49ba834-1d80-4003-bf95-6dfd68b25a49\" (UID: \"a49ba834-1d80-4003-bf95-6dfd68b25a49\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.005570 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq9wb\" (UniqueName: \"kubernetes.io/projected/a02d72db-aa64-4300-acc0-93b8677bf6df-kube-api-access-kq9wb\") pod \"a02d72db-aa64-4300-acc0-93b8677bf6df\" (UID: \"a02d72db-aa64-4300-acc0-93b8677bf6df\") "
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.006646 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.006577 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-utilities" (OuterVolumeSpecName: "utilities") pod "1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" (UID: "1860f8a4-ce73-4d74-8dcf-0a43a90d35b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.008325 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e4c105-766f-4c1a-befe-a059da17406f-kube-api-access-nrb74" (OuterVolumeSpecName: "kube-api-access-nrb74") pod "e5e4c105-766f-4c1a-befe-a059da17406f" (UID: "e5e4c105-766f-4c1a-befe-a059da17406f"). InnerVolumeSpecName "kube-api-access-nrb74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.008809 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-utilities" (OuterVolumeSpecName: "utilities") pod "a02d72db-aa64-4300-acc0-93b8677bf6df" (UID: "a02d72db-aa64-4300-acc0-93b8677bf6df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.010207 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-utilities" (OuterVolumeSpecName: "utilities") pod "a49ba834-1d80-4003-bf95-6dfd68b25a49" (UID: "a49ba834-1d80-4003-bf95-6dfd68b25a49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.011706 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-utilities" (OuterVolumeSpecName: "utilities") pod "6729f1b7-260e-4a90-a2da-1258e036b9ea" (UID: "6729f1b7-260e-4a90-a2da-1258e036b9ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.012670 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd47e85-de9d-475a-8907-4e805cb1cfc8-kube-api-access-xffcg" (OuterVolumeSpecName: "kube-api-access-xffcg") pod "2fd47e85-de9d-475a-8907-4e805cb1cfc8" (UID: "2fd47e85-de9d-475a-8907-4e805cb1cfc8"). InnerVolumeSpecName "kube-api-access-xffcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.014119 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-kube-api-access-fnvrm" (OuterVolumeSpecName: "kube-api-access-fnvrm") pod "1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" (UID: "1860f8a4-ce73-4d74-8dcf-0a43a90d35b9"). InnerVolumeSpecName "kube-api-access-fnvrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.014396 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02d72db-aa64-4300-acc0-93b8677bf6df-kube-api-access-kq9wb" (OuterVolumeSpecName: "kube-api-access-kq9wb") pod "a02d72db-aa64-4300-acc0-93b8677bf6df" (UID: "a02d72db-aa64-4300-acc0-93b8677bf6df"). InnerVolumeSpecName "kube-api-access-kq9wb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.016085 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2fd47e85-de9d-475a-8907-4e805cb1cfc8" (UID: "2fd47e85-de9d-475a-8907-4e805cb1cfc8"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.016169 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2fd47e85-de9d-475a-8907-4e805cb1cfc8" (UID: "2fd47e85-de9d-475a-8907-4e805cb1cfc8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.017013 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49ba834-1d80-4003-bf95-6dfd68b25a49-kube-api-access-j6z2f" (OuterVolumeSpecName: "kube-api-access-j6z2f") pod "a49ba834-1d80-4003-bf95-6dfd68b25a49" (UID: "a49ba834-1d80-4003-bf95-6dfd68b25a49"). InnerVolumeSpecName "kube-api-access-j6z2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.027109 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" (UID: "1860f8a4-ce73-4d74-8dcf-0a43a90d35b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.039281 4689 scope.go:117] "RemoveContainer" containerID="a6d1801b3be7323c166a363fd3eb4cbf9313874621e48d6e2b858add8dbcf416"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.062004 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6729f1b7-260e-4a90-a2da-1258e036b9ea-kube-api-access-mxp5n" (OuterVolumeSpecName: "kube-api-access-mxp5n") pod "6729f1b7-260e-4a90-a2da-1258e036b9ea" (UID: "6729f1b7-260e-4a90-a2da-1258e036b9ea"). InnerVolumeSpecName "kube-api-access-mxp5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.074435 4689 scope.go:117] "RemoveContainer" containerID="6d7d3beddc20a2bf7df1809d2beb1552a8eb8de8e78a5ee6eaaa11077bbae855"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.107977 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xffcg\" (UniqueName: \"kubernetes.io/projected/2fd47e85-de9d-475a-8907-4e805cb1cfc8-kube-api-access-xffcg\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108017 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108027 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrb74\" (UniqueName: \"kubernetes.io/projected/e5e4c105-766f-4c1a-befe-a059da17406f-kube-api-access-nrb74\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108038 4689 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108048 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxp5n\" (UniqueName: \"kubernetes.io/projected/6729f1b7-260e-4a90-a2da-1258e036b9ea-kube-api-access-mxp5n\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108057 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6z2f\" (UniqueName: \"kubernetes.io/projected/a49ba834-1d80-4003-bf95-6dfd68b25a49-kube-api-access-j6z2f\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108067 4689 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2fd47e85-de9d-475a-8907-4e805cb1cfc8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108078 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108088 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108096 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108104 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnvrm\" (UniqueName: \"kubernetes.io/projected/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9-kube-api-access-fnvrm\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108113 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.108122 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq9wb\" (UniqueName: \"kubernetes.io/projected/a02d72db-aa64-4300-acc0-93b8677bf6df-kube-api-access-kq9wb\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.117032 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5e4c105-766f-4c1a-befe-a059da17406f" (UID: "e5e4c105-766f-4c1a-befe-a059da17406f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.117089 4689 scope.go:117] "RemoveContainer" containerID="3d0a6f0755b434599c6185fe6a3b4666c869a86a3a50a4c67602865af3f6b235"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.149984 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a49ba834-1d80-4003-bf95-6dfd68b25a49" (UID: "a49ba834-1d80-4003-bf95-6dfd68b25a49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.152336 4689 scope.go:117] "RemoveContainer" containerID="5d810298f1db3066bb5475e8df6c93e4dc11fe825d906aaaef3cd96b0a54ec4c"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.168570 4689 scope.go:117] "RemoveContainer" containerID="21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.182168 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a02d72db-aa64-4300-acc0-93b8677bf6df" (UID: "a02d72db-aa64-4300-acc0-93b8677bf6df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.184585 4689 scope.go:117] "RemoveContainer" containerID="cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.197334 4689 scope.go:117] "RemoveContainer" containerID="5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.209418 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a02d72db-aa64-4300-acc0-93b8677bf6df-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.209446 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a49ba834-1d80-4003-bf95-6dfd68b25a49-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.209455 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5e4c105-766f-4c1a-befe-a059da17406f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.222270 4689 scope.go:117] "RemoveContainer" containerID="21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950"
Dec 01 08:43:57 crc kubenswrapper[4689]: E1201 08:43:57.222688 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950\": container with ID starting with 21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950 not found: ID does not exist" containerID="21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.222724 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950"} err="failed to get container status \"21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950\": rpc error: code = NotFound desc = could not find container \"21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950\": container with ID starting with 21bba6ed1f84c6c8fd98e99fbd940df6c6a04e1b568eae5f7728d234dfa10950 not found: ID does not exist"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.222747 4689 scope.go:117] "RemoveContainer" containerID="cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b"
Dec 01 08:43:57 crc kubenswrapper[4689]: E1201 08:43:57.223177 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b\": container with ID starting with cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b not found: ID does not exist" containerID="cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.223204 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b"} err="failed to get container status \"cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b\": rpc error: code = NotFound desc = could not find container \"cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b\": container with ID starting with cea92daafe0d95df7f372a07fd8913471a77c2417c4737a6c2fd48b03844ca9b not found: ID does not exist"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.223220 4689 scope.go:117] "RemoveContainer" containerID="5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5"
Dec 01 08:43:57 crc kubenswrapper[4689]: E1201 08:43:57.223502 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5\": container with ID starting with 5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5 not found: ID does not exist" containerID="5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.223523 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5"} err="failed to get container status \"5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5\": rpc error: code = NotFound desc = could not find container \"5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5\": container with ID starting with 5a221929012e73570e9ecb5e31c4d018eae5a0f70b37169178a2bd6ee1d6e9a5 not found: ID does not exist"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.223536 4689 scope.go:117] "RemoveContainer" containerID="44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.234998 4689 scope.go:117] "RemoveContainer" containerID="49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.244144 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6729f1b7-260e-4a90-a2da-1258e036b9ea" (UID: "6729f1b7-260e-4a90-a2da-1258e036b9ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.263206 4689 scope.go:117] "RemoveContainer" containerID="79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.272073 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l54ll"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.279127 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l54ll"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.286520 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jhh4c"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.288140 4689 scope.go:117] "RemoveContainer" containerID="44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c"
Dec 01 08:43:57 crc kubenswrapper[4689]: E1201 08:43:57.292046 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c\": container with ID starting with 44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c not found: ID does not exist" containerID="44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.292120 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c"} err="failed to get container status \"44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c\": rpc error: code = NotFound desc = could not find container \"44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c\": container with ID starting with 44edf9b750829c8e2029f41b7edfb58c306be2756f69913cff5e3ae9b9214d6c not found: ID does not exist"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.292174 4689 scope.go:117] "RemoveContainer" containerID="49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0"
Dec 01 08:43:57 crc kubenswrapper[4689]: E1201 08:43:57.293124 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0\": container with ID starting with 49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0 not found: ID does not exist" containerID="49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.293326 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0"} err="failed to get container status \"49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0\": rpc error: code = NotFound desc = could not find container \"49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0\": container with ID starting with 49e50dc2437dd2c604e494752cbc28981dc54fd4501f6cd3a953cdacc9fcc5f0 not found: ID does not exist"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.293563 4689 scope.go:117] "RemoveContainer" containerID="79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4"
Dec 01 08:43:57 crc kubenswrapper[4689]: E1201 08:43:57.294962 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4\": container with ID starting with 79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4 not found: ID does not exist" containerID="79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.295011 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4"} err="failed to get container status \"79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4\": rpc error: code = NotFound desc = could not find container \"79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4\": container with ID starting with 79386418abdc31ad13ebb605959faa59aa6ddcac8acde324c5265a0c5a8b6fc4 not found: ID does not exist"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.295034 4689 scope.go:117] "RemoveContainer" containerID="994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.301964 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9s25"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.310428 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6729f1b7-260e-4a90-a2da-1258e036b9ea-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.316129 4689 scope.go:117] "RemoveContainer" containerID="465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.316392 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l9s25"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.320230 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4kvdm"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.330947 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4kvdm"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.338515 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4zck"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.341565 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c4zck"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.345795 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-twqmb"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.349360 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-twqmb"]
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.350327 4689 scope.go:117] "RemoveContainer" containerID="0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.375454 4689 scope.go:117] "RemoveContainer" containerID="994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825"
Dec 01 08:43:57 crc kubenswrapper[4689]: E1201 08:43:57.375850 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825\": container with ID starting with 994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825 not found: ID does not exist" containerID="994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.375889 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825"} err="failed to get container status \"994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825\": rpc error: code = NotFound desc = could not find container \"994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825\": container with ID starting with 994348d608ec625bf26f07ea88512d99f3e93556858eaef535eea39a7971f825 not found: ID does not exist"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.375923 4689 scope.go:117] "RemoveContainer" containerID="465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4"
Dec 01 08:43:57 crc kubenswrapper[4689]: E1201 08:43:57.376266 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4\": container with ID starting with 465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4 not found: ID does not exist" containerID="465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.376292 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4"} err="failed to get container status \"465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4\": rpc error: code = NotFound desc = could not find container \"465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4\": container with ID starting with 465640181ae9fe565d15e3e966890f7db41a724afa7d0c97a9cf5f9701cbebc4 not found: ID does not exist"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.376318 4689 scope.go:117] "RemoveContainer" containerID="0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b"
Dec 01 08:43:57 crc kubenswrapper[4689]: E1201 08:43:57.376662 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b\": container with ID starting with 0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b not found: ID does not exist" containerID="0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.376680 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b"} err="failed to get container status \"0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b\": rpc error: code = NotFound desc = could not find container \"0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b\": container with ID starting with 0aad43468abdef26208728422bec95dbd4ee44d2763ecf3f3efe6054880a857b not found: ID does not exist"
Dec 01 08:43:57 crc kubenswrapper[4689]: I1201 08:43:57.933351 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 01 08:43:58 crc kubenswrapper[4689]: I1201 08:43:58.010705 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" event={"ID":"0cd9ccf0-2f85-4649-ac80-931f337566ca","Type":"ContainerStarted","Data":"a44c6ac6c569313a1279c318dc779effd9720f53f67036c49136a7b38e19dcc8"}
Dec 01 08:43:58 crc kubenswrapper[4689]: I1201 08:43:58.010755 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" event={"ID":"0cd9ccf0-2f85-4649-ac80-931f337566ca","Type":"ContainerStarted","Data":"bf6b7050f5d31d575f7173b889c2938d294c5bc3637a5841bb6883513cb69b74"}
Dec 01 08:43:58 crc kubenswrapper[4689]: I1201 08:43:58.013588 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z26c6" event={"ID":"6729f1b7-260e-4a90-a2da-1258e036b9ea","Type":"ContainerDied","Data":"3024ef3e65427744783eb0d8f92b3875f5d2a9ef21f13896f13dec1f9767c687"}
Dec 01 08:43:58 crc kubenswrapper[4689]: I1201 08:43:58.013620 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z26c6"
Dec 01 08:43:58 crc kubenswrapper[4689]: I1201 08:43:58.027670 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podStartSLOduration=2.027645341 podStartE2EDuration="2.027645341s" podCreationTimestamp="2025-12-01 08:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:43:58.027070096 +0000 UTC m=+318.099358010" watchObservedRunningTime="2025-12-01 08:43:58.027645341 +0000 UTC m=+318.099933245"
Dec 01 08:43:58 crc kubenswrapper[4689]: I1201 08:43:58.066055 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z26c6"]
Dec 01 08:43:58 crc kubenswrapper[4689]: I1201 08:43:58.066105 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z26c6"]
Dec 01 08:43:59 crc kubenswrapper[4689]: I1201 08:43:59.020057 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:59 crc kubenswrapper[4689]: I1201 08:43:59.023652 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 08:43:59 crc kubenswrapper[4689]: I1201 08:43:59.054472 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" path="/var/lib/kubelet/pods/1860f8a4-ce73-4d74-8dcf-0a43a90d35b9/volumes"
Dec 01 08:43:59 crc kubenswrapper[4689]: I1201 08:43:59.055193 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd47e85-de9d-475a-8907-4e805cb1cfc8" path="/var/lib/kubelet/pods/2fd47e85-de9d-475a-8907-4e805cb1cfc8/volumes"
Dec 01 08:43:59 crc kubenswrapper[4689]: I1201 08:43:59.055677 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" path="/var/lib/kubelet/pods/6729f1b7-260e-4a90-a2da-1258e036b9ea/volumes"
Dec 01 08:43:59 crc kubenswrapper[4689]: I1201 08:43:59.056805 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df" path="/var/lib/kubelet/pods/a02d72db-aa64-4300-acc0-93b8677bf6df/volumes"
Dec 01 08:43:59 crc kubenswrapper[4689]: I1201 08:43:59.057466 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" path="/var/lib/kubelet/pods/a49ba834-1d80-4003-bf95-6dfd68b25a49/volumes"
Dec 01 08:43:59 crc kubenswrapper[4689]: I1201 08:43:59.058534 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e4c105-766f-4c1a-befe-a059da17406f" path="/var/lib/kubelet/pods/e5e4c105-766f-4c1a-befe-a059da17406f/volumes"
Dec 01 08:44:02 crc kubenswrapper[4689]: I1201 08:44:02.781150 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9nx2j"]
Dec 01 08:44:02 crc kubenswrapper[4689]: I1201 08:44:02.782028 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" podUID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" containerName="controller-manager" containerID="cri-o://1cfced87066f601fd4e4bbc26e411cfd82fa6de9dcd5fee9ce9c0459836affaa" gracePeriod=30
Dec 01 08:44:02 crc kubenswrapper[4689]: I1201 08:44:02.863974 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4"]
Dec 01 08:44:02 crc kubenswrapper[4689]: I1201 08:44:02.864620 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" podUID="bd8122c2-aaf0-4148-849c-ca4502dd0f55" containerName="route-controller-manager" containerID="cri-o://7298609177a1a785d5c63f463609e12770765cb02525245e890c6e41230a272e" gracePeriod=30
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.048558 4689 generic.go:334] "Generic (PLEG): container finished" podID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" containerID="1cfced87066f601fd4e4bbc26e411cfd82fa6de9dcd5fee9ce9c0459836affaa" exitCode=0
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.054618 4689 generic.go:334] "Generic (PLEG): container finished" podID="bd8122c2-aaf0-4148-849c-ca4502dd0f55" containerID="7298609177a1a785d5c63f463609e12770765cb02525245e890c6e41230a272e" exitCode=0
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.056488 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" event={"ID":"5fb20738-492b-4b13-bf8a-5c32aabc0f32","Type":"ContainerDied","Data":"1cfced87066f601fd4e4bbc26e411cfd82fa6de9dcd5fee9ce9c0459836affaa"}
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.056543 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" event={"ID":"bd8122c2-aaf0-4148-849c-ca4502dd0f55","Type":"ContainerDied","Data":"7298609177a1a785d5c63f463609e12770765cb02525245e890c6e41230a272e"}
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.259716 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.305882 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.389473 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-config\") pod \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") "
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.389523 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-proxy-ca-bundles\") pod \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") "
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.389551 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z6gq\" (UniqueName: \"kubernetes.io/projected/5fb20738-492b-4b13-bf8a-5c32aabc0f32-kube-api-access-2z6gq\") pod \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") "
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.389908 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-client-ca\") pod \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") "
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.391014 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-client-ca" (OuterVolumeSpecName: "client-ca") pod "5fb20738-492b-4b13-bf8a-5c32aabc0f32" (UID: "5fb20738-492b-4b13-bf8a-5c32aabc0f32"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.391050 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-config" (OuterVolumeSpecName: "config") pod "5fb20738-492b-4b13-bf8a-5c32aabc0f32" (UID: "5fb20738-492b-4b13-bf8a-5c32aabc0f32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.391102 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb20738-492b-4b13-bf8a-5c32aabc0f32-serving-cert\") pod \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\" (UID: \"5fb20738-492b-4b13-bf8a-5c32aabc0f32\") "
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.391335 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.391354 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-client-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.391428 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5fb20738-492b-4b13-bf8a-5c32aabc0f32" (UID: "5fb20738-492b-4b13-bf8a-5c32aabc0f32"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.396392 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb20738-492b-4b13-bf8a-5c32aabc0f32-kube-api-access-2z6gq" (OuterVolumeSpecName: "kube-api-access-2z6gq") pod "5fb20738-492b-4b13-bf8a-5c32aabc0f32" (UID: "5fb20738-492b-4b13-bf8a-5c32aabc0f32"). InnerVolumeSpecName "kube-api-access-2z6gq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.396686 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fb20738-492b-4b13-bf8a-5c32aabc0f32-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5fb20738-492b-4b13-bf8a-5c32aabc0f32" (UID: "5fb20738-492b-4b13-bf8a-5c32aabc0f32"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.492224 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-config\") pod \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") "
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.492327 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd8122c2-aaf0-4148-849c-ca4502dd0f55-serving-cert\") pod \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") "
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.493786 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxzzf\" (UniqueName: \"kubernetes.io/projected/bd8122c2-aaf0-4148-849c-ca4502dd0f55-kube-api-access-zxzzf\") pod \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") "
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.493848 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-client-ca\") pod \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\" (UID: \"bd8122c2-aaf0-4148-849c-ca4502dd0f55\") "
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.493871 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-config" (OuterVolumeSpecName: "config") pod "bd8122c2-aaf0-4148-849c-ca4502dd0f55" (UID: "bd8122c2-aaf0-4148-849c-ca4502dd0f55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.494277 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5fb20738-492b-4b13-bf8a-5c32aabc0f32-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.494291 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z6gq\" (UniqueName: \"kubernetes.io/projected/5fb20738-492b-4b13-bf8a-5c32aabc0f32-kube-api-access-2z6gq\") on node \"crc\" DevicePath \"\""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.494306 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.494315 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fb20738-492b-4b13-bf8a-5c32aabc0f32-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.494815 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-client-ca" (OuterVolumeSpecName: "client-ca") pod "bd8122c2-aaf0-4148-849c-ca4502dd0f55" (UID: "bd8122c2-aaf0-4148-849c-ca4502dd0f55"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.497070 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd8122c2-aaf0-4148-849c-ca4502dd0f55-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bd8122c2-aaf0-4148-849c-ca4502dd0f55" (UID: "bd8122c2-aaf0-4148-849c-ca4502dd0f55"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.497171 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8122c2-aaf0-4148-849c-ca4502dd0f55-kube-api-access-zxzzf" (OuterVolumeSpecName: "kube-api-access-zxzzf") pod "bd8122c2-aaf0-4148-849c-ca4502dd0f55" (UID: "bd8122c2-aaf0-4148-849c-ca4502dd0f55"). InnerVolumeSpecName "kube-api-access-zxzzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.595831 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxzzf\" (UniqueName: \"kubernetes.io/projected/bd8122c2-aaf0-4148-849c-ca4502dd0f55-kube-api-access-zxzzf\") on node \"crc\" DevicePath \"\""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.595887 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd8122c2-aaf0-4148-849c-ca4502dd0f55-client-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.595899 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd8122c2-aaf0-4148-849c-ca4502dd0f55-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.969868 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-rthft"]
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.970178 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerName="registry-server"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.970200 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerName="registry-server"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.970217 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerName="extract-content"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.970226 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerName="extract-content"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.970240 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerName="extract-utilities"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.970249 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerName="extract-utilities"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.970259 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4c105-766f-4c1a-befe-a059da17406f" containerName="extract-utilities"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.970267 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4c105-766f-4c1a-befe-a059da17406f" containerName="extract-utilities"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.970278 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerName="extract-content"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.970285 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerName="extract-content"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.970295 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerName="extract-utilities"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971100 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerName="extract-utilities"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971121 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerName="extract-utilities"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971130 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerName="extract-utilities"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971143 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerName="registry-server"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971150 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerName="registry-server"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971161 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerName="extract-content"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971169 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerName="extract-content"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971181 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerName="registry-server"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971188 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerName="registry-server"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971199 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8122c2-aaf0-4148-849c-ca4502dd0f55" containerName="route-controller-manager"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971209 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8122c2-aaf0-4148-849c-ca4502dd0f55" containerName="route-controller-manager"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971221 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" containerName="controller-manager"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971228 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" containerName="controller-manager"
Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971239 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerName="extract-content"
Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971246 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df"
containerName="extract-content" Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971256 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4c105-766f-4c1a-befe-a059da17406f" containerName="extract-content" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971263 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4c105-766f-4c1a-befe-a059da17406f" containerName="extract-content" Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971275 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerName="extract-utilities" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971283 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerName="extract-utilities" Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971294 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerName="registry-server" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971300 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerName="registry-server" Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971308 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd47e85-de9d-475a-8907-4e805cb1cfc8" containerName="marketplace-operator" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971316 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd47e85-de9d-475a-8907-4e805cb1cfc8" containerName="marketplace-operator" Dec 01 08:44:03 crc kubenswrapper[4689]: E1201 08:44:03.971327 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4c105-766f-4c1a-befe-a059da17406f" containerName="registry-server" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971333 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4c105-766f-4c1a-befe-a059da17406f" containerName="registry-server" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971497 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8122c2-aaf0-4148-849c-ca4502dd0f55" containerName="route-controller-manager" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971512 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd47e85-de9d-475a-8907-4e805cb1cfc8" containerName="marketplace-operator" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971523 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" containerName="controller-manager" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971535 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e4c105-766f-4c1a-befe-a059da17406f" containerName="registry-server" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971547 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1860f8a4-ce73-4d74-8dcf-0a43a90d35b9" containerName="registry-server" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971555 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6729f1b7-260e-4a90-a2da-1258e036b9ea" containerName="registry-server" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971565 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49ba834-1d80-4003-bf95-6dfd68b25a49" containerName="registry-server" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.971576 4689 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a02d72db-aa64-4300-acc0-93b8677bf6df" containerName="registry-server" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.972065 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.984250 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-rthft"] Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.991335 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj"] Dec 01 08:44:03 crc kubenswrapper[4689]: I1201 08:44:03.992248 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.001340 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-proxy-ca-bundles\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.001525 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-serving-cert\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.001554 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742b088a-8fb2-4638-a203-1df77aea26de-serving-cert\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.001726 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkcb9\" (UniqueName: \"kubernetes.io/projected/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-kube-api-access-wkcb9\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.001821 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-client-ca\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.001932 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ck4m\" (UniqueName: \"kubernetes.io/projected/742b088a-8fb2-4638-a203-1df77aea26de-kube-api-access-6ck4m\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: 
\"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.002016 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-config\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.002090 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-client-ca\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.002181 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-config\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.027635 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj"] Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.062105 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.062139 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4" event={"ID":"bd8122c2-aaf0-4148-849c-ca4502dd0f55","Type":"ContainerDied","Data":"55e58fa576eba472106579380938f50b800bf2c97237f09253ead86b0cc82e12"} Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.062216 4689 scope.go:117] "RemoveContainer" containerID="7298609177a1a785d5c63f463609e12770765cb02525245e890c6e41230a272e" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.072176 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" event={"ID":"5fb20738-492b-4b13-bf8a-5c32aabc0f32","Type":"ContainerDied","Data":"58416c2f7a7456ce974f1716db49d1c6087a0dd645aad9bc922f1ae2ea44c60a"} Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.072498 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9nx2j" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.084497 4689 scope.go:117] "RemoveContainer" containerID="1cfced87066f601fd4e4bbc26e411cfd82fa6de9dcd5fee9ce9c0459836affaa" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.100394 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4"] Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.103278 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ck4m\" (UniqueName: \"kubernetes.io/projected/742b088a-8fb2-4638-a203-1df77aea26de-kube-api-access-6ck4m\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.103416 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-config\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.105014 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-client-ca\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.105133 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-config\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.105251 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-proxy-ca-bundles\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.105288 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742b088a-8fb2-4638-a203-1df77aea26de-serving-cert\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.105313 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-serving-cert\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.105357 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wkcb9\" (UniqueName: \"kubernetes.io/projected/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-kube-api-access-wkcb9\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.105522 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-client-ca\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.106982 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-client-ca\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.107057 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-config\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.110623 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-proxy-ca-bundles\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.110961 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-client-ca\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.116045 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742b088a-8fb2-4638-a203-1df77aea26de-serving-cert\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.116049 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-serving-cert\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.121564 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-config\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: 
\"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.121675 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6z2v4"] Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.126460 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9nx2j"] Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.130436 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ck4m\" (UniqueName: \"kubernetes.io/projected/742b088a-8fb2-4638-a203-1df77aea26de-kube-api-access-6ck4m\") pod \"controller-manager-5494bbdbdf-rthft\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.130905 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9nx2j"] Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.136345 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkcb9\" (UniqueName: \"kubernetes.io/projected/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-kube-api-access-wkcb9\") pod \"route-controller-manager-795b8d5757-fflsj\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") " pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.293445 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.314585 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.539876 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-rthft"] Dec 01 08:44:04 crc kubenswrapper[4689]: W1201 08:44:04.608498 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9598129f_1ef1_45e5_9fd0_c8ad7a816f3e.slice/crio-cedb01b2a0592bd0ca0d30135a84c23b5ce9034c94d530162c21b1b7a569d0b3 WatchSource:0}: Error finding container cedb01b2a0592bd0ca0d30135a84c23b5ce9034c94d530162c21b1b7a569d0b3: Status 404 returned error can't find the container with id cedb01b2a0592bd0ca0d30135a84c23b5ce9034c94d530162c21b1b7a569d0b3 Dec 01 08:44:04 crc kubenswrapper[4689]: I1201 08:44:04.620747 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj"] Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.054863 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb20738-492b-4b13-bf8a-5c32aabc0f32" path="/var/lib/kubelet/pods/5fb20738-492b-4b13-bf8a-5c32aabc0f32/volumes" Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.056287 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd8122c2-aaf0-4148-849c-ca4502dd0f55" path="/var/lib/kubelet/pods/bd8122c2-aaf0-4148-849c-ca4502dd0f55/volumes" Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.078452 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" event={"ID":"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e","Type":"ContainerStarted","Data":"414a82c1cd2315604265047a2344934cf5524d2c8e394a7a0bc83c7f74061314"} Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.078523 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" event={"ID":"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e","Type":"ContainerStarted","Data":"cedb01b2a0592bd0ca0d30135a84c23b5ce9034c94d530162c21b1b7a569d0b3"} Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.078900 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.081977 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" event={"ID":"742b088a-8fb2-4638-a203-1df77aea26de","Type":"ContainerStarted","Data":"199179aa4943f8df5f0731a84ed1f04dcbcd766b2b352163809069d98556c7bb"} Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.082033 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" event={"ID":"742b088a-8fb2-4638-a203-1df77aea26de","Type":"ContainerStarted","Data":"fbeae8ed7eae474de795173d00ed2d3cbbbe22aac4f0c13f772a598c47516950"} Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.082815 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.097224 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" 
Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.108557 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" podStartSLOduration=2.108532244 podStartE2EDuration="2.108532244s" podCreationTimestamp="2025-12-01 08:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:44:05.105654254 +0000 UTC m=+325.177942178" watchObservedRunningTime="2025-12-01 08:44:05.108532244 +0000 UTC m=+325.180820148" Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.130408 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" podStartSLOduration=2.130385145 podStartE2EDuration="2.130385145s" podCreationTimestamp="2025-12-01 08:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:44:05.127980567 +0000 UTC m=+325.200268471" watchObservedRunningTime="2025-12-01 08:44:05.130385145 +0000 UTC m=+325.202673049" Dec 01 08:44:05 crc kubenswrapper[4689]: I1201 08:44:05.192080 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" Dec 01 08:44:22 crc kubenswrapper[4689]: I1201 08:44:22.817132 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-rthft"] Dec 01 08:44:22 crc kubenswrapper[4689]: I1201 08:44:22.818749 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" podUID="742b088a-8fb2-4638-a203-1df77aea26de" containerName="controller-manager" containerID="cri-o://199179aa4943f8df5f0731a84ed1f04dcbcd766b2b352163809069d98556c7bb" gracePeriod=30 Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.182599 4689 generic.go:334] "Generic (PLEG): container finished" podID="742b088a-8fb2-4638-a203-1df77aea26de" containerID="199179aa4943f8df5f0731a84ed1f04dcbcd766b2b352163809069d98556c7bb" exitCode=0 Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.182655 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" event={"ID":"742b088a-8fb2-4638-a203-1df77aea26de","Type":"ContainerDied","Data":"199179aa4943f8df5f0731a84ed1f04dcbcd766b2b352163809069d98556c7bb"} Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.294745 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.362305 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742b088a-8fb2-4638-a203-1df77aea26de-serving-cert\") pod \"742b088a-8fb2-4638-a203-1df77aea26de\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.362410 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-proxy-ca-bundles\") pod \"742b088a-8fb2-4638-a203-1df77aea26de\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.363255 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "742b088a-8fb2-4638-a203-1df77aea26de" (UID: "742b088a-8fb2-4638-a203-1df77aea26de"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.363550 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ck4m\" (UniqueName: \"kubernetes.io/projected/742b088a-8fb2-4638-a203-1df77aea26de-kube-api-access-6ck4m\") pod \"742b088a-8fb2-4638-a203-1df77aea26de\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.363579 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-config\") pod \"742b088a-8fb2-4638-a203-1df77aea26de\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.363975 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-client-ca\") pod \"742b088a-8fb2-4638-a203-1df77aea26de\" (UID: \"742b088a-8fb2-4638-a203-1df77aea26de\") " Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.364207 4689 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.364484 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-client-ca" (OuterVolumeSpecName: "client-ca") pod "742b088a-8fb2-4638-a203-1df77aea26de" (UID: "742b088a-8fb2-4638-a203-1df77aea26de"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.364852 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-config" (OuterVolumeSpecName: "config") pod "742b088a-8fb2-4638-a203-1df77aea26de" (UID: "742b088a-8fb2-4638-a203-1df77aea26de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.369740 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742b088a-8fb2-4638-a203-1df77aea26de-kube-api-access-6ck4m" (OuterVolumeSpecName: "kube-api-access-6ck4m") pod "742b088a-8fb2-4638-a203-1df77aea26de" (UID: "742b088a-8fb2-4638-a203-1df77aea26de"). InnerVolumeSpecName "kube-api-access-6ck4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.370092 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742b088a-8fb2-4638-a203-1df77aea26de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "742b088a-8fb2-4638-a203-1df77aea26de" (UID: "742b088a-8fb2-4638-a203-1df77aea26de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.465630 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.465666 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/742b088a-8fb2-4638-a203-1df77aea26de-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.465678 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742b088a-8fb2-4638-a203-1df77aea26de-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:44:23 crc kubenswrapper[4689]: I1201 08:44:23.465689 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ck4m\" (UniqueName: \"kubernetes.io/projected/742b088a-8fb2-4638-a203-1df77aea26de-kube-api-access-6ck4m\") on node \"crc\" DevicePath \"\"" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.189413 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" event={"ID":"742b088a-8fb2-4638-a203-1df77aea26de","Type":"ContainerDied","Data":"fbeae8ed7eae474de795173d00ed2d3cbbbe22aac4f0c13f772a598c47516950"} Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.189463 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5494bbdbdf-rthft" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.189612 4689 scope.go:117] "RemoveContainer" containerID="199179aa4943f8df5f0731a84ed1f04dcbcd766b2b352163809069d98556c7bb" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.254457 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-rthft"] Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.260254 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5494bbdbdf-rthft"] Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.399229 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-689b8cbc5f-scmr6"] Dec 01 08:44:24 crc kubenswrapper[4689]: E1201 08:44:24.399707 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742b088a-8fb2-4638-a203-1df77aea26de" containerName="controller-manager" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.399754 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="742b088a-8fb2-4638-a203-1df77aea26de" containerName="controller-manager" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.400004 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="742b088a-8fb2-4638-a203-1df77aea26de" containerName="controller-manager" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.400741 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.406492 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.406604 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.406492 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.407255 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.407447 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.412838 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.426616 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.429612 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-689b8cbc5f-scmr6"] Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.478617 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fad122ae-5995-4afe-8520-d3f958ff065c-serving-cert\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " 
pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.478756 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fad122ae-5995-4afe-8520-d3f958ff065c-proxy-ca-bundles\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.478822 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fad122ae-5995-4afe-8520-d3f958ff065c-client-ca\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.478897 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad122ae-5995-4afe-8520-d3f958ff065c-config\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.478961 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czhmc\" (UniqueName: \"kubernetes.io/projected/fad122ae-5995-4afe-8520-d3f958ff065c-kube-api-access-czhmc\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.580330 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fad122ae-5995-4afe-8520-d3f958ff065c-proxy-ca-bundles\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.580425 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fad122ae-5995-4afe-8520-d3f958ff065c-client-ca\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.580489 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad122ae-5995-4afe-8520-d3f958ff065c-config\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.580511 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czhmc\" (UniqueName: \"kubernetes.io/projected/fad122ae-5995-4afe-8520-d3f958ff065c-kube-api-access-czhmc\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 
08:44:24.580609 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fad122ae-5995-4afe-8520-d3f958ff065c-serving-cert\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.581710 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fad122ae-5995-4afe-8520-d3f958ff065c-client-ca\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.583180 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fad122ae-5995-4afe-8520-d3f958ff065c-config\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.583919 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fad122ae-5995-4afe-8520-d3f958ff065c-proxy-ca-bundles\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.586453 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fad122ae-5995-4afe-8520-d3f958ff065c-serving-cert\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.607652 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czhmc\" (UniqueName: \"kubernetes.io/projected/fad122ae-5995-4afe-8520-d3f958ff065c-kube-api-access-czhmc\") pod \"controller-manager-689b8cbc5f-scmr6\" (UID: \"fad122ae-5995-4afe-8520-d3f958ff065c\") " pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.735389 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:24 crc kubenswrapper[4689]: I1201 08:44:24.993549 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-689b8cbc5f-scmr6"] Dec 01 08:44:25 crc kubenswrapper[4689]: W1201 08:44:25.005984 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfad122ae_5995_4afe_8520_d3f958ff065c.slice/crio-c1684639d84bc1a6479a0fc5de95a47e0a594c10aeaf2ba0931dc50433394a46 WatchSource:0}: Error finding container c1684639d84bc1a6479a0fc5de95a47e0a594c10aeaf2ba0931dc50433394a46: Status 404 returned error can't find the container with id c1684639d84bc1a6479a0fc5de95a47e0a594c10aeaf2ba0931dc50433394a46 Dec 01 08:44:25 crc kubenswrapper[4689]: I1201 08:44:25.058984 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="742b088a-8fb2-4638-a203-1df77aea26de" path="/var/lib/kubelet/pods/742b088a-8fb2-4638-a203-1df77aea26de/volumes" Dec 01 08:44:25 crc kubenswrapper[4689]: I1201 08:44:25.197941 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" event={"ID":"fad122ae-5995-4afe-8520-d3f958ff065c","Type":"ContainerStarted","Data":"e21104b04c58702ea57e260a2598cbda6483ee4f24dfb6f5c4f2d8736f1c7f72"} Dec 01 08:44:25 crc kubenswrapper[4689]: I1201 08:44:25.198013 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" event={"ID":"fad122ae-5995-4afe-8520-d3f958ff065c","Type":"ContainerStarted","Data":"c1684639d84bc1a6479a0fc5de95a47e0a594c10aeaf2ba0931dc50433394a46"} Dec 01 08:44:25 crc kubenswrapper[4689]: I1201 08:44:25.198475 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:25 crc kubenswrapper[4689]: I1201 08:44:25.200659 4689 patch_prober.go:28] interesting pod/controller-manager-689b8cbc5f-scmr6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Dec 01 08:44:25 crc kubenswrapper[4689]: I1201 08:44:25.201038 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" podUID="fad122ae-5995-4afe-8520-d3f958ff065c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Dec 01 08:44:25 crc kubenswrapper[4689]: I1201 08:44:25.223091 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" podStartSLOduration=3.22302976 podStartE2EDuration="3.22302976s" podCreationTimestamp="2025-12-01 08:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:44:25.221172661 +0000 UTC m=+345.293460585" watchObservedRunningTime="2025-12-01 08:44:25.22302976 +0000 UTC m=+345.295317664" Dec 01 08:44:26 crc kubenswrapper[4689]: I1201 08:44:26.212924 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" Dec 01 08:44:30 crc 
kubenswrapper[4689]: I1201 08:44:30.848645 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5nsm4"]
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.850671 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.853431 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.867386 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5nsm4"]
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.870122 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cea5449-8a30-47d4-bb0f-e7a6c785bee5-utilities\") pod \"certified-operators-5nsm4\" (UID: \"3cea5449-8a30-47d4-bb0f-e7a6c785bee5\") " pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.870211 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6wfv\" (UniqueName: \"kubernetes.io/projected/3cea5449-8a30-47d4-bb0f-e7a6c785bee5-kube-api-access-n6wfv\") pod \"certified-operators-5nsm4\" (UID: \"3cea5449-8a30-47d4-bb0f-e7a6c785bee5\") " pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.870299 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cea5449-8a30-47d4-bb0f-e7a6c785bee5-catalog-content\") pod \"certified-operators-5nsm4\" (UID: \"3cea5449-8a30-47d4-bb0f-e7a6c785bee5\") " pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.972428 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cea5449-8a30-47d4-bb0f-e7a6c785bee5-catalog-content\") pod \"certified-operators-5nsm4\" (UID: \"3cea5449-8a30-47d4-bb0f-e7a6c785bee5\") " pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.972548 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cea5449-8a30-47d4-bb0f-e7a6c785bee5-utilities\") pod \"certified-operators-5nsm4\" (UID: \"3cea5449-8a30-47d4-bb0f-e7a6c785bee5\") " pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.972605 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6wfv\" (UniqueName: \"kubernetes.io/projected/3cea5449-8a30-47d4-bb0f-e7a6c785bee5-kube-api-access-n6wfv\") pod \"certified-operators-5nsm4\" (UID: \"3cea5449-8a30-47d4-bb0f-e7a6c785bee5\") " pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.973405 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cea5449-8a30-47d4-bb0f-e7a6c785bee5-utilities\") pod \"certified-operators-5nsm4\" (UID: \"3cea5449-8a30-47d4-bb0f-e7a6c785bee5\") " pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:30 crc kubenswrapper[4689]: I1201 08:44:30.974736 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cea5449-8a30-47d4-bb0f-e7a6c785bee5-catalog-content\") pod \"certified-operators-5nsm4\" (UID: \"3cea5449-8a30-47d4-bb0f-e7a6c785bee5\") " pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.005841 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6wfv\" (UniqueName: \"kubernetes.io/projected/3cea5449-8a30-47d4-bb0f-e7a6c785bee5-kube-api-access-n6wfv\") pod \"certified-operators-5nsm4\" (UID: \"3cea5449-8a30-47d4-bb0f-e7a6c785bee5\") " pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.043645 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fwm44"]
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.045609 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.048226 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.057702 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwm44"]
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.074084 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/665830e4-f511-4fa5-8892-75d5bc618ede-kube-api-access-62dbk\") pod \"community-operators-fwm44\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") " pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.074177 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-utilities\") pod \"community-operators-fwm44\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") " pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.074204 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-catalog-content\") pod \"community-operators-fwm44\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") " pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.175773 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/665830e4-f511-4fa5-8892-75d5bc618ede-kube-api-access-62dbk\") pod \"community-operators-fwm44\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") " pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.175914 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-utilities\") pod \"community-operators-fwm44\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") " pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.175957 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-catalog-content\") pod \"community-operators-fwm44\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") " pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.176033 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.176680 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-utilities\") pod \"community-operators-fwm44\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") " pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.177153 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-catalog-content\") pod \"community-operators-fwm44\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") " pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.201554 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/665830e4-f511-4fa5-8892-75d5bc618ede-kube-api-access-62dbk\") pod \"community-operators-fwm44\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") " pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.364138 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.585338 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5nsm4"]
Dec 01 08:44:31 crc kubenswrapper[4689]: I1201 08:44:31.788627 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwm44"]
Dec 01 08:44:31 crc kubenswrapper[4689]: W1201 08:44:31.810587 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod665830e4_f511_4fa5_8892_75d5bc618ede.slice/crio-a65e49ec654ad782658313b1670c9c985aa6ad3f814ab6a67e567a542fe0a641 WatchSource:0}: Error finding container a65e49ec654ad782658313b1670c9c985aa6ad3f814ab6a67e567a542fe0a641: Status 404 returned error can't find the container with id a65e49ec654ad782658313b1670c9c985aa6ad3f814ab6a67e567a542fe0a641
Dec 01 08:44:32 crc kubenswrapper[4689]: I1201 08:44:32.248180 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nsm4" event={"ID":"3cea5449-8a30-47d4-bb0f-e7a6c785bee5","Type":"ContainerDied","Data":"9e0f552e5d51370ecee3da9e52fa22558aeaa4e4e726f0bfd28611fd6c5916df"}
Dec 01 08:44:32 crc kubenswrapper[4689]: I1201 08:44:32.248109 4689 generic.go:334] "Generic (PLEG): container finished" podID="3cea5449-8a30-47d4-bb0f-e7a6c785bee5" containerID="9e0f552e5d51370ecee3da9e52fa22558aeaa4e4e726f0bfd28611fd6c5916df" exitCode=0
Dec 01 08:44:32 crc kubenswrapper[4689]: I1201 08:44:32.251670 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nsm4" event={"ID":"3cea5449-8a30-47d4-bb0f-e7a6c785bee5","Type":"ContainerStarted","Data":"c1c3c5ce456957152c5b37bbff1540029765867af1993862606c6b0de61d7587"}
Dec 01 08:44:32 crc kubenswrapper[4689]: I1201 08:44:32.254834 4689 generic.go:334] "Generic (PLEG): container finished" podID="665830e4-f511-4fa5-8892-75d5bc618ede" containerID="3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11" exitCode=0
Dec 01 08:44:32 crc kubenswrapper[4689]: I1201 08:44:32.254894 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwm44" event={"ID":"665830e4-f511-4fa5-8892-75d5bc618ede","Type":"ContainerDied","Data":"3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11"}
Dec 01 08:44:32 crc kubenswrapper[4689]: I1201 08:44:32.254935 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwm44" event={"ID":"665830e4-f511-4fa5-8892-75d5bc618ede","Type":"ContainerStarted","Data":"a65e49ec654ad782658313b1670c9c985aa6ad3f814ab6a67e567a542fe0a641"}
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.238437 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kww7g"]
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.239762 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.241488 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.256505 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kww7g"]
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.263206 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwm44" event={"ID":"665830e4-f511-4fa5-8892-75d5bc618ede","Type":"ContainerStarted","Data":"a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f"}
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.265409 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nsm4" event={"ID":"3cea5449-8a30-47d4-bb0f-e7a6c785bee5","Type":"ContainerStarted","Data":"6f47080a003109db1b177605f6f2c99acbb0534f2b65729a696bf34b99fbc36f"}
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.312038 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075f35f7-3a97-4145-b911-9a14de1e1fee-utilities\") pod \"redhat-marketplace-kww7g\" (UID: \"075f35f7-3a97-4145-b911-9a14de1e1fee\") " pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.312104 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qnv\" (UniqueName: \"kubernetes.io/projected/075f35f7-3a97-4145-b911-9a14de1e1fee-kube-api-access-q4qnv\") pod \"redhat-marketplace-kww7g\" (UID: \"075f35f7-3a97-4145-b911-9a14de1e1fee\") " pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.312153 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075f35f7-3a97-4145-b911-9a14de1e1fee-catalog-content\") pod \"redhat-marketplace-kww7g\" (UID: \"075f35f7-3a97-4145-b911-9a14de1e1fee\") " pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.414497 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075f35f7-3a97-4145-b911-9a14de1e1fee-utilities\") pod \"redhat-marketplace-kww7g\" (UID: \"075f35f7-3a97-4145-b911-9a14de1e1fee\") " pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.414580 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075f35f7-3a97-4145-b911-9a14de1e1fee-utilities\") pod \"redhat-marketplace-kww7g\" (UID: \"075f35f7-3a97-4145-b911-9a14de1e1fee\") " pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.414646 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qnv\" (UniqueName: \"kubernetes.io/projected/075f35f7-3a97-4145-b911-9a14de1e1fee-kube-api-access-q4qnv\") pod \"redhat-marketplace-kww7g\" (UID: \"075f35f7-3a97-4145-b911-9a14de1e1fee\") " pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.415015 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075f35f7-3a97-4145-b911-9a14de1e1fee-catalog-content\") pod \"redhat-marketplace-kww7g\" (UID: \"075f35f7-3a97-4145-b911-9a14de1e1fee\") " pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.415282 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075f35f7-3a97-4145-b911-9a14de1e1fee-catalog-content\") pod \"redhat-marketplace-kww7g\" (UID: \"075f35f7-3a97-4145-b911-9a14de1e1fee\") " pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.436417 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2vf7n"]
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.437494 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qnv\" (UniqueName: \"kubernetes.io/projected/075f35f7-3a97-4145-b911-9a14de1e1fee-kube-api-access-q4qnv\") pod \"redhat-marketplace-kww7g\" (UID: \"075f35f7-3a97-4145-b911-9a14de1e1fee\") " pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.438988 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.441240 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.471540 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vf7n"]
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.516701 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f527ec-49a1-4be9-a67b-676eb6b8feba-catalog-content\") pod \"redhat-operators-2vf7n\" (UID: \"11f527ec-49a1-4be9-a67b-676eb6b8feba\") " pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.516785 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f527ec-49a1-4be9-a67b-676eb6b8feba-utilities\") pod \"redhat-operators-2vf7n\" (UID: \"11f527ec-49a1-4be9-a67b-676eb6b8feba\") " pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.516939 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxt6n\" (UniqueName: \"kubernetes.io/projected/11f527ec-49a1-4be9-a67b-676eb6b8feba-kube-api-access-kxt6n\") pod \"redhat-operators-2vf7n\" (UID: \"11f527ec-49a1-4be9-a67b-676eb6b8feba\") " pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.553185 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.618386 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f527ec-49a1-4be9-a67b-676eb6b8feba-catalog-content\") pod \"redhat-operators-2vf7n\" (UID: \"11f527ec-49a1-4be9-a67b-676eb6b8feba\") " pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.618447 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f527ec-49a1-4be9-a67b-676eb6b8feba-utilities\") pod \"redhat-operators-2vf7n\" (UID: \"11f527ec-49a1-4be9-a67b-676eb6b8feba\") " pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.618477 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxt6n\" (UniqueName: \"kubernetes.io/projected/11f527ec-49a1-4be9-a67b-676eb6b8feba-kube-api-access-kxt6n\") pod \"redhat-operators-2vf7n\" (UID: \"11f527ec-49a1-4be9-a67b-676eb6b8feba\") " pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.619302 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f527ec-49a1-4be9-a67b-676eb6b8feba-utilities\") pod \"redhat-operators-2vf7n\" (UID: \"11f527ec-49a1-4be9-a67b-676eb6b8feba\") " pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.619977 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f527ec-49a1-4be9-a67b-676eb6b8feba-catalog-content\") pod \"redhat-operators-2vf7n\" (UID: \"11f527ec-49a1-4be9-a67b-676eb6b8feba\") " pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.649395 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxt6n\" (UniqueName: \"kubernetes.io/projected/11f527ec-49a1-4be9-a67b-676eb6b8feba-kube-api-access-kxt6n\") pod \"redhat-operators-2vf7n\" (UID: \"11f527ec-49a1-4be9-a67b-676eb6b8feba\") " pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:33 crc kubenswrapper[4689]: I1201 08:44:33.779476 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.006997 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kww7g"]
Dec 01 08:44:34 crc kubenswrapper[4689]: W1201 08:44:34.012309 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod075f35f7_3a97_4145_b911_9a14de1e1fee.slice/crio-23e0546d04cfd5f11fe642657435a522b2815be06a156fb3184bdf44a54ffb22 WatchSource:0}: Error finding container 23e0546d04cfd5f11fe642657435a522b2815be06a156fb3184bdf44a54ffb22: Status 404 returned error can't find the container with id 23e0546d04cfd5f11fe642657435a522b2815be06a156fb3184bdf44a54ffb22
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.180552 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vf7n"]
Dec 01 08:44:34 crc kubenswrapper[4689]: W1201 08:44:34.188476 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11f527ec_49a1_4be9_a67b_676eb6b8feba.slice/crio-7be9bbe66fd991d8f3c438310e018aea940428d810204be741002de6aeeb369d WatchSource:0}: Error finding container 7be9bbe66fd991d8f3c438310e018aea940428d810204be741002de6aeeb369d: Status 404 returned error can't find the container with id 7be9bbe66fd991d8f3c438310e018aea940428d810204be741002de6aeeb369d
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.271906 4689 generic.go:334] "Generic (PLEG): container finished" podID="665830e4-f511-4fa5-8892-75d5bc618ede" containerID="a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f" exitCode=0
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.271973 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwm44" event={"ID":"665830e4-f511-4fa5-8892-75d5bc618ede","Type":"ContainerDied","Data":"a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f"}
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.277278 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vf7n" event={"ID":"11f527ec-49a1-4be9-a67b-676eb6b8feba","Type":"ContainerStarted","Data":"7be9bbe66fd991d8f3c438310e018aea940428d810204be741002de6aeeb369d"}
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.279328 4689 generic.go:334] "Generic (PLEG): container finished" podID="3cea5449-8a30-47d4-bb0f-e7a6c785bee5" containerID="6f47080a003109db1b177605f6f2c99acbb0534f2b65729a696bf34b99fbc36f" exitCode=0
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.279445 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nsm4" event={"ID":"3cea5449-8a30-47d4-bb0f-e7a6c785bee5","Type":"ContainerDied","Data":"6f47080a003109db1b177605f6f2c99acbb0534f2b65729a696bf34b99fbc36f"}
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.287886 4689 generic.go:334] "Generic (PLEG): container finished" podID="075f35f7-3a97-4145-b911-9a14de1e1fee" containerID="1d13d3e14db1e382a8179a9c5aa19f3f6a0f5d6fa13a17d04a7b1fc4c19bad1c" exitCode=0
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.287947 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kww7g" event={"ID":"075f35f7-3a97-4145-b911-9a14de1e1fee","Type":"ContainerDied","Data":"1d13d3e14db1e382a8179a9c5aa19f3f6a0f5d6fa13a17d04a7b1fc4c19bad1c"}
Dec 01 08:44:34 crc kubenswrapper[4689]: I1201 08:44:34.288010 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kww7g" event={"ID":"075f35f7-3a97-4145-b911-9a14de1e1fee","Type":"ContainerStarted","Data":"23e0546d04cfd5f11fe642657435a522b2815be06a156fb3184bdf44a54ffb22"}
Dec 01 08:44:35 crc kubenswrapper[4689]: I1201 08:44:35.294418 4689 generic.go:334] "Generic (PLEG): container finished" podID="11f527ec-49a1-4be9-a67b-676eb6b8feba" containerID="321bb0363a22afa815aeccbb7e2317c8f1ecd29e2f013151d5ead992d19df1d6" exitCode=0
Dec 01 08:44:35 crc kubenswrapper[4689]: I1201 08:44:35.294497 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vf7n" event={"ID":"11f527ec-49a1-4be9-a67b-676eb6b8feba","Type":"ContainerDied","Data":"321bb0363a22afa815aeccbb7e2317c8f1ecd29e2f013151d5ead992d19df1d6"}
Dec 01 08:44:35 crc kubenswrapper[4689]: I1201 08:44:35.297500 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nsm4" event={"ID":"3cea5449-8a30-47d4-bb0f-e7a6c785bee5","Type":"ContainerStarted","Data":"b14c3972166e41cd84a67ac790ffac6b4f5208884b3f4d5b89f935967d37ada1"}
Dec 01 08:44:35 crc kubenswrapper[4689]: I1201 08:44:35.301082 4689 generic.go:334] "Generic (PLEG): container finished" podID="075f35f7-3a97-4145-b911-9a14de1e1fee" containerID="eaa946eda817a0c464bd5df87528a71d85af03019fcfbcd9d326480c4c03d3f5" exitCode=0
Dec 01 08:44:35 crc kubenswrapper[4689]: I1201 08:44:35.301135 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kww7g" event={"ID":"075f35f7-3a97-4145-b911-9a14de1e1fee","Type":"ContainerDied","Data":"eaa946eda817a0c464bd5df87528a71d85af03019fcfbcd9d326480c4c03d3f5"}
Dec 01 08:44:35 crc kubenswrapper[4689]: I1201 08:44:35.305738 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwm44" event={"ID":"665830e4-f511-4fa5-8892-75d5bc618ede","Type":"ContainerStarted","Data":"46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1"}
Dec 01 08:44:35 crc kubenswrapper[4689]: I1201 08:44:35.342509 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fwm44" podStartSLOduration=1.532545012 podStartE2EDuration="4.342481577s" podCreationTimestamp="2025-12-01 08:44:31 +0000 UTC" firstStartedPulling="2025-12-01 08:44:32.257409132 +0000 UTC m=+352.329697076" lastFinishedPulling="2025-12-01 08:44:35.067345737 +0000 UTC m=+355.139633641" observedRunningTime="2025-12-01 08:44:35.337822026 +0000 UTC m=+355.410109930" watchObservedRunningTime="2025-12-01 08:44:35.342481577 +0000 UTC m=+355.414769491"
Dec 01 08:44:35 crc kubenswrapper[4689]: I1201 08:44:35.381524 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5nsm4" podStartSLOduration=2.646162337 podStartE2EDuration="5.381498245s" podCreationTimestamp="2025-12-01 08:44:30 +0000 UTC" firstStartedPulling="2025-12-01 08:44:32.250651316 +0000 UTC m=+352.322939260" lastFinishedPulling="2025-12-01 08:44:34.985987254 +0000 UTC m=+355.058275168" observedRunningTime="2025-12-01 08:44:35.379694798 +0000 UTC m=+355.451982712" watchObservedRunningTime="2025-12-01 08:44:35.381498245 +0000 UTC m=+355.453786149"
Dec 01 08:44:36 crc kubenswrapper[4689]: I1201 08:44:36.313328 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vf7n" event={"ID":"11f527ec-49a1-4be9-a67b-676eb6b8feba","Type":"ContainerStarted","Data":"c250e1c09abbe4cfeac68589c70837f02f71124099d0f521d6d4897d8159d1a2"}
Dec 01 08:44:36 crc kubenswrapper[4689]: I1201 08:44:36.315698 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kww7g" event={"ID":"075f35f7-3a97-4145-b911-9a14de1e1fee","Type":"ContainerStarted","Data":"c989906a35afbeb013d4915d8aad2db10a5e0b86f55c8a77a23847d84a0a7eef"}
Dec 01 08:44:36 crc kubenswrapper[4689]: I1201 08:44:36.358408 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kww7g" podStartSLOduration=1.858728784 podStartE2EDuration="3.358383047s" podCreationTimestamp="2025-12-01 08:44:33 +0000 UTC" firstStartedPulling="2025-12-01 08:44:34.293008001 +0000 UTC m=+354.365295905" lastFinishedPulling="2025-12-01 08:44:35.792662254 +0000 UTC m=+355.864950168" observedRunningTime="2025-12-01 08:44:36.354915067 +0000 UTC m=+356.427202971" watchObservedRunningTime="2025-12-01 08:44:36.358383047 +0000 UTC m=+356.430670961"
Dec 01 08:44:37 crc kubenswrapper[4689]: I1201 08:44:37.324842 4689 generic.go:334] "Generic (PLEG): container finished" podID="11f527ec-49a1-4be9-a67b-676eb6b8feba" containerID="c250e1c09abbe4cfeac68589c70837f02f71124099d0f521d6d4897d8159d1a2" exitCode=0
Dec 01 08:44:37 crc kubenswrapper[4689]: I1201 08:44:37.325611 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vf7n" event={"ID":"11f527ec-49a1-4be9-a67b-676eb6b8feba","Type":"ContainerDied","Data":"c250e1c09abbe4cfeac68589c70837f02f71124099d0f521d6d4897d8159d1a2"}
Dec 01 08:44:38 crc kubenswrapper[4689]: I1201 08:44:38.332617 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vf7n" event={"ID":"11f527ec-49a1-4be9-a67b-676eb6b8feba","Type":"ContainerStarted","Data":"81968c68957a5006b78a87fab0e9790709b8c04149c52284e4311d180d38d9b4"}
Dec 01 08:44:38 crc kubenswrapper[4689]: I1201 08:44:38.354189 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2vf7n" podStartSLOduration=2.794487772 podStartE2EDuration="5.354167046s" podCreationTimestamp="2025-12-01 08:44:33 +0000 UTC" firstStartedPulling="2025-12-01 08:44:35.299094645 +0000 UTC m=+355.371382549" lastFinishedPulling="2025-12-01 08:44:37.858773919 +0000 UTC m=+357.931061823" observedRunningTime="2025-12-01 08:44:38.350780547 +0000 UTC m=+358.423068451" watchObservedRunningTime="2025-12-01 08:44:38.354167046 +0000 UTC m=+358.426454950"
Dec 01 08:44:39 crc kubenswrapper[4689]: I1201 08:44:39.147636 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 08:44:39 crc kubenswrapper[4689]: I1201 08:44:39.148080 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 08:44:41 crc kubenswrapper[4689]: I1201 08:44:41.176543 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:41 crc kubenswrapper[4689]: I1201 08:44:41.176609 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:41 crc kubenswrapper[4689]: I1201 08:44:41.246852 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:41 crc kubenswrapper[4689]: I1201 08:44:41.364525 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:41 crc kubenswrapper[4689]: I1201 08:44:41.364587 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:41 crc kubenswrapper[4689]: I1201 08:44:41.394806 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5nsm4"
Dec 01 08:44:41 crc kubenswrapper[4689]: I1201 08:44:41.406440 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:42 crc kubenswrapper[4689]: I1201 08:44:42.451970 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fwm44"
Dec 01 08:44:43 crc kubenswrapper[4689]: I1201 08:44:43.553879 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:43 crc kubenswrapper[4689]: I1201 08:44:43.554345 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:43 crc kubenswrapper[4689]: I1201 08:44:43.601161 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:43 crc kubenswrapper[4689]: I1201 08:44:43.779768 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:43 crc kubenswrapper[4689]: I1201 08:44:43.779837 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:43 crc kubenswrapper[4689]: I1201 08:44:43.849183 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:44 crc kubenswrapper[4689]: I1201 08:44:44.408412 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kww7g"
Dec 01 08:44:44 crc kubenswrapper[4689]: I1201 08:44:44.446669 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2vf7n"
Dec 01 08:44:57 crc kubenswrapper[4689]: I1201 08:44:57.824447 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tg572"]
Dec 01 08:44:57 crc kubenswrapper[4689]: I1201 08:44:57.826234 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:57 crc kubenswrapper[4689]: I1201 08:44:57.852752 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tg572"]
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.015245 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b45e776-d57b-4922-b11b-80b8de9f85d3-bound-sa-token\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.015311 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5b45e776-d57b-4922-b11b-80b8de9f85d3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.015340 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.015735 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5b45e776-d57b-4922-b11b-80b8de9f85d3-registry-tls\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.015903 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5b45e776-d57b-4922-b11b-80b8de9f85d3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.016174 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b45e776-d57b-4922-b11b-80b8de9f85d3-trusted-ca\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.016390 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5b45e776-d57b-4922-b11b-80b8de9f85d3-registry-certificates\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.016459 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlbt\" (UniqueName: \"kubernetes.io/projected/5b45e776-d57b-4922-b11b-80b8de9f85d3-kube-api-access-lwlbt\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.055646 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.118305 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b45e776-d57b-4922-b11b-80b8de9f85d3-trusted-ca\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.118910 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5b45e776-d57b-4922-b11b-80b8de9f85d3-registry-certificates\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.118953 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlbt\" (UniqueName: \"kubernetes.io/projected/5b45e776-d57b-4922-b11b-80b8de9f85d3-kube-api-access-lwlbt\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.118999 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b45e776-d57b-4922-b11b-80b8de9f85d3-bound-sa-token\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.119035 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5b45e776-d57b-4922-b11b-80b8de9f85d3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.119068 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5b45e776-d57b-4922-b11b-80b8de9f85d3-registry-tls\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.119109 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5b45e776-d57b-4922-b11b-80b8de9f85d3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.119634 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b45e776-d57b-4922-b11b-80b8de9f85d3-trusted-ca\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.120116 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5b45e776-d57b-4922-b11b-80b8de9f85d3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.121658 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5b45e776-d57b-4922-b11b-80b8de9f85d3-registry-certificates\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.130282 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5b45e776-d57b-4922-b11b-80b8de9f85d3-registry-tls\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.144003 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlbt\" (UniqueName: \"kubernetes.io/projected/5b45e776-d57b-4922-b11b-80b8de9f85d3-kube-api-access-lwlbt\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.145156 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5b45e776-d57b-4922-b11b-80b8de9f85d3-bound-sa-token\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.148130 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5b45e776-d57b-4922-b11b-80b8de9f85d3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tg572\" (UID: \"5b45e776-d57b-4922-b11b-80b8de9f85d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.150062 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:44:58 crc kubenswrapper[4689]: I1201 08:44:58.721235 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tg572"]
Dec 01 08:44:59 crc kubenswrapper[4689]: I1201 08:44:59.461162 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tg572" event={"ID":"5b45e776-d57b-4922-b11b-80b8de9f85d3","Type":"ContainerStarted","Data":"c822273bd75c6e06530cbe4e1ab23e37c94e47044a13a409aa8effb395e81d02"}
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.172661 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"]
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.174407 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.178100 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.182331 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.190138 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"]
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.354174 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvzp\" (UniqueName: \"kubernetes.io/projected/a6fd5553-6e2f-4b49-93c0-f03807e48f54-kube-api-access-5dvzp\") pod \"collect-profiles-29409645-lhqn8\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.354518 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6fd5553-6e2f-4b49-93c0-f03807e48f54-config-volume\") pod \"collect-profiles-29409645-lhqn8\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.354618 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6fd5553-6e2f-4b49-93c0-f03807e48f54-secret-volume\") pod \"collect-profiles-29409645-lhqn8\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.456330 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvzp\" (UniqueName: \"kubernetes.io/projected/a6fd5553-6e2f-4b49-93c0-f03807e48f54-kube-api-access-5dvzp\") pod \"collect-profiles-29409645-lhqn8\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.456467 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6fd5553-6e2f-4b49-93c0-f03807e48f54-config-volume\") pod \"collect-profiles-29409645-lhqn8\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.456504 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6fd5553-6e2f-4b49-93c0-f03807e48f54-secret-volume\") pod \"collect-profiles-29409645-lhqn8\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.457669 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6fd5553-6e2f-4b49-93c0-f03807e48f54-config-volume\") pod \"collect-profiles-29409645-lhqn8\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.470141 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6fd5553-6e2f-4b49-93c0-f03807e48f54-secret-volume\") pod \"collect-profiles-29409645-lhqn8\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.476075 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dvzp\" (UniqueName: \"kubernetes.io/projected/a6fd5553-6e2f-4b49-93c0-f03807e48f54-kube-api-access-5dvzp\") pod \"collect-profiles-29409645-lhqn8\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.478484 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tg572" event={"ID":"5b45e776-d57b-4922-b11b-80b8de9f85d3","Type":"ContainerStarted","Data":"bf650b552f76fc0d7744c2c8d494c949c7eae2c74dfddf5cc406dc1639dfe9cc"}
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.479194 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tg572"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.496700 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.500303 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tg572" podStartSLOduration=3.500245647 podStartE2EDuration="3.500245647s" podCreationTimestamp="2025-12-01 08:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:45:00.498339487 +0000 UTC m=+380.570627391" watchObservedRunningTime="2025-12-01 08:45:00.500245647 +0000 UTC m=+380.572533591"
Dec 01 08:45:00 crc kubenswrapper[4689]: I1201 08:45:00.977875 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"]
Dec 01 08:45:00 crc kubenswrapper[4689]: W1201 08:45:00.990865 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6fd5553_6e2f_4b49_93c0_f03807e48f54.slice/crio-935d3aaf5fac3d636e9602dfbca6c2d9775a479fd46b1a354cf3873be7ba9cc3 WatchSource:0}: Error finding container 935d3aaf5fac3d636e9602dfbca6c2d9775a479fd46b1a354cf3873be7ba9cc3: Status 404 returned error can't find the container with id 935d3aaf5fac3d636e9602dfbca6c2d9775a479fd46b1a354cf3873be7ba9cc3
Dec 01 08:45:01 crc kubenswrapper[4689]: I1201 08:45:01.486009 4689 generic.go:334] "Generic (PLEG): container finished" podID="a6fd5553-6e2f-4b49-93c0-f03807e48f54" containerID="1dd93318847e65d87543324043ddf9a6763d5a6d37cb0375cb4fee2781bb9041" exitCode=0
Dec 01 08:45:01 crc kubenswrapper[4689]: I1201 08:45:01.486433 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8" event={"ID":"a6fd5553-6e2f-4b49-93c0-f03807e48f54","Type":"ContainerDied","Data":"1dd93318847e65d87543324043ddf9a6763d5a6d37cb0375cb4fee2781bb9041"}
Dec 01 08:45:01 crc kubenswrapper[4689]: I1201 08:45:01.486491 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8" event={"ID":"a6fd5553-6e2f-4b49-93c0-f03807e48f54","Type":"ContainerStarted","Data":"935d3aaf5fac3d636e9602dfbca6c2d9775a479fd46b1a354cf3873be7ba9cc3"}
Dec 01 08:45:02 crc kubenswrapper[4689]: I1201 08:45:02.783272 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj"]
Dec 01 08:45:02 crc kubenswrapper[4689]: I1201 08:45:02.784136 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" podUID="9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" containerName="route-controller-manager" containerID="cri-o://414a82c1cd2315604265047a2344934cf5524d2c8e394a7a0bc83c7f74061314" gracePeriod=30
Dec 01 08:45:02 crc kubenswrapper[4689]: I1201 08:45:02.867949 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:02 crc kubenswrapper[4689]: I1201 08:45:02.998889 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6fd5553-6e2f-4b49-93c0-f03807e48f54-config-volume\") pod \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") "
Dec 01 08:45:02 crc kubenswrapper[4689]: I1201 08:45:02.999022 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dvzp\" (UniqueName: \"kubernetes.io/projected/a6fd5553-6e2f-4b49-93c0-f03807e48f54-kube-api-access-5dvzp\") pod \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") "
Dec 01 08:45:02 crc kubenswrapper[4689]: I1201 08:45:02.999076 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6fd5553-6e2f-4b49-93c0-f03807e48f54-secret-volume\") pod \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\" (UID: \"a6fd5553-6e2f-4b49-93c0-f03807e48f54\") "
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.000439 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fd5553-6e2f-4b49-93c0-f03807e48f54-config-volume" (OuterVolumeSpecName: "config-volume") pod "a6fd5553-6e2f-4b49-93c0-f03807e48f54" (UID: "a6fd5553-6e2f-4b49-93c0-f03807e48f54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.007676 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6fd5553-6e2f-4b49-93c0-f03807e48f54-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a6fd5553-6e2f-4b49-93c0-f03807e48f54" (UID: "a6fd5553-6e2f-4b49-93c0-f03807e48f54"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.009504 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6fd5553-6e2f-4b49-93c0-f03807e48f54-kube-api-access-5dvzp" (OuterVolumeSpecName: "kube-api-access-5dvzp") pod "a6fd5553-6e2f-4b49-93c0-f03807e48f54" (UID: "a6fd5553-6e2f-4b49-93c0-f03807e48f54"). InnerVolumeSpecName "kube-api-access-5dvzp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.101000 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dvzp\" (UniqueName: \"kubernetes.io/projected/a6fd5553-6e2f-4b49-93c0-f03807e48f54-kube-api-access-5dvzp\") on node \"crc\" DevicePath \"\""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.101218 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6fd5553-6e2f-4b49-93c0-f03807e48f54-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.101300 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6fd5553-6e2f-4b49-93c0-f03807e48f54-config-volume\") on node \"crc\" DevicePath \"\""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.500514 4689 generic.go:334] "Generic (PLEG): container finished" podID="9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" containerID="414a82c1cd2315604265047a2344934cf5524d2c8e394a7a0bc83c7f74061314" exitCode=0
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.500650 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" event={"ID":"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e","Type":"ContainerDied","Data":"414a82c1cd2315604265047a2344934cf5524d2c8e394a7a0bc83c7f74061314"}
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.503523 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8" event={"ID":"a6fd5553-6e2f-4b49-93c0-f03807e48f54","Type":"ContainerDied","Data":"935d3aaf5fac3d636e9602dfbca6c2d9775a479fd46b1a354cf3873be7ba9cc3"}
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.503639 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.503651 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="935d3aaf5fac3d636e9602dfbca6c2d9775a479fd46b1a354cf3873be7ba9cc3"
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.768768 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj"
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.838954 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-client-ca\") pod \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") "
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.840226 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkcb9\" (UniqueName: \"kubernetes.io/projected/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-kube-api-access-wkcb9\") pod \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") "
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.840346 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-serving-cert\") pod \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") "
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.840537 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-config\") pod \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\" (UID: \"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e\") "
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.842607 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-client-ca" (OuterVolumeSpecName: "client-ca") pod "9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" (UID: "9598129f-1ef1-45e5-9fd0-c8ad7a816f3e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.845939 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-config" (OuterVolumeSpecName: "config") pod "9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" (UID: "9598129f-1ef1-45e5-9fd0-c8ad7a816f3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.854830 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-kube-api-access-wkcb9" (OuterVolumeSpecName: "kube-api-access-wkcb9") pod "9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" (UID: "9598129f-1ef1-45e5-9fd0-c8ad7a816f3e"). InnerVolumeSpecName "kube-api-access-wkcb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.863969 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" (UID: "9598129f-1ef1-45e5-9fd0-c8ad7a816f3e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.945248 4689 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.945301 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.945323 4689 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-client-ca\") on node \"crc\" DevicePath \"\""
Dec 01 08:45:03 crc kubenswrapper[4689]: I1201 08:45:03.945337 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkcb9\" (UniqueName: \"kubernetes.io/projected/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e-kube-api-access-wkcb9\") on node \"crc\" DevicePath \"\""
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.450715 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"]
Dec 01 08:45:04 crc kubenswrapper[4689]: E1201 08:45:04.451072 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" containerName="route-controller-manager"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.451483 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" containerName="route-controller-manager"
Dec 01 08:45:04 crc kubenswrapper[4689]: E1201 08:45:04.451517 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6fd5553-6e2f-4b49-93c0-f03807e48f54" containerName="collect-profiles"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.451524 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6fd5553-6e2f-4b49-93c0-f03807e48f54" containerName="collect-profiles"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.452060 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" containerName="route-controller-manager"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.452179 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6fd5553-6e2f-4b49-93c0-f03807e48f54" containerName="collect-profiles"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.453257 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.466346 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"]
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.511352 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj" event={"ID":"9598129f-1ef1-45e5-9fd0-c8ad7a816f3e","Type":"ContainerDied","Data":"cedb01b2a0592bd0ca0d30135a84c23b5ce9034c94d530162c21b1b7a569d0b3"}
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.511492 4689 scope.go:117] "RemoveContainer" containerID="414a82c1cd2315604265047a2344934cf5524d2c8e394a7a0bc83c7f74061314"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.511532 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.551427 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj"]
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.553525 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e48cba8c-2540-496c-87df-1be952119db4-client-ca\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.553594 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e48cba8c-2540-496c-87df-1be952119db4-serving-cert\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.553629 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv9jc\" (UniqueName: \"kubernetes.io/projected/e48cba8c-2540-496c-87df-1be952119db4-kube-api-access-qv9jc\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.553670 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48cba8c-2540-496c-87df-1be952119db4-config\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.556337 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795b8d5757-fflsj"]
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.654769 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e48cba8c-2540-496c-87df-1be952119db4-client-ca\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.654828 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e48cba8c-2540-496c-87df-1be952119db4-serving-cert\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.654854 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv9jc\" (UniqueName: \"kubernetes.io/projected/e48cba8c-2540-496c-87df-1be952119db4-kube-api-access-qv9jc\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.654893 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48cba8c-2540-496c-87df-1be952119db4-config\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.656542 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48cba8c-2540-496c-87df-1be952119db4-config\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.656870 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e48cba8c-2540-496c-87df-1be952119db4-client-ca\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.662797 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e48cba8c-2540-496c-87df-1be952119db4-serving-cert\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.683295 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv9jc\" (UniqueName: \"kubernetes.io/projected/e48cba8c-2540-496c-87df-1be952119db4-kube-api-access-qv9jc\") pod \"route-controller-manager-6cf74ff74d-rrhrc\" (UID: \"e48cba8c-2540-496c-87df-1be952119db4\") " pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"
Dec 01 08:45:04 crc kubenswrapper[4689]: I1201 08:45:04.787021 4689 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc" Dec 01 08:45:05 crc kubenswrapper[4689]: I1201 08:45:05.037078 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc"] Dec 01 08:45:05 crc kubenswrapper[4689]: W1201 08:45:05.040172 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode48cba8c_2540_496c_87df_1be952119db4.slice/crio-da4b8dc0f2bc681edd823a66545b84b5e8e736f1a1950e6438bac4d537cf26f7 WatchSource:0}: Error finding container da4b8dc0f2bc681edd823a66545b84b5e8e736f1a1950e6438bac4d537cf26f7: Status 404 returned error can't find the container with id da4b8dc0f2bc681edd823a66545b84b5e8e736f1a1950e6438bac4d537cf26f7 Dec 01 08:45:05 crc kubenswrapper[4689]: I1201 08:45:05.065277 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9598129f-1ef1-45e5-9fd0-c8ad7a816f3e" path="/var/lib/kubelet/pods/9598129f-1ef1-45e5-9fd0-c8ad7a816f3e/volumes" Dec 01 08:45:05 crc kubenswrapper[4689]: I1201 08:45:05.519060 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc" event={"ID":"e48cba8c-2540-496c-87df-1be952119db4","Type":"ContainerStarted","Data":"4c2e7dd9228ac5120e44eabf6f008b08944531db1376add1239001d55d4aaeb8"} Dec 01 08:45:05 crc kubenswrapper[4689]: I1201 08:45:05.519128 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc" event={"ID":"e48cba8c-2540-496c-87df-1be952119db4","Type":"ContainerStarted","Data":"da4b8dc0f2bc681edd823a66545b84b5e8e736f1a1950e6438bac4d537cf26f7"} Dec 01 08:45:05 crc kubenswrapper[4689]: I1201 08:45:05.519696 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc" Dec 01 08:45:05 crc kubenswrapper[4689]: I1201 08:45:05.914059 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc" Dec 01 08:45:05 crc kubenswrapper[4689]: I1201 08:45:05.937898 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc" podStartSLOduration=3.937868882 podStartE2EDuration="3.937868882s" podCreationTimestamp="2025-12-01 08:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:45:05.535415779 +0000 UTC m=+385.607703693" watchObservedRunningTime="2025-12-01 08:45:05.937868882 +0000 UTC m=+386.010156786" Dec 01 08:45:09 crc kubenswrapper[4689]: I1201 08:45:09.147191 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:45:09 crc kubenswrapper[4689]: I1201 08:45:09.147647 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 01 08:45:18 crc kubenswrapper[4689]: I1201 08:45:18.158176 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tg572" Dec 01 08:45:18 crc kubenswrapper[4689]: I1201 08:45:18.220852 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8fdl"] Dec 01 08:45:39 crc kubenswrapper[4689]: I1201 08:45:39.146969 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:45:39 crc kubenswrapper[4689]: I1201 08:45:39.147696 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:45:39 crc kubenswrapper[4689]: I1201 08:45:39.147772 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:45:39 crc kubenswrapper[4689]: I1201 08:45:39.148868 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37b1b11c7bc8ffe4ab73103e6e1b196742e6409d79e78201bda5211f96e4082a"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 08:45:39 crc kubenswrapper[4689]: I1201 08:45:39.148956 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://37b1b11c7bc8ffe4ab73103e6e1b196742e6409d79e78201bda5211f96e4082a" gracePeriod=600 Dec 01 08:45:39 crc kubenswrapper[4689]: I1201 08:45:39.744523 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="37b1b11c7bc8ffe4ab73103e6e1b196742e6409d79e78201bda5211f96e4082a" exitCode=0 Dec 01 08:45:39 crc kubenswrapper[4689]: I1201 08:45:39.744644 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"37b1b11c7bc8ffe4ab73103e6e1b196742e6409d79e78201bda5211f96e4082a"} Dec 01 08:45:39 crc kubenswrapper[4689]: I1201 08:45:39.744936 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"77ce6d5e8c89d838f6758e3e368fce5280554d8513e298f9f66f88dccdb20c3d"} Dec 01 08:45:39 crc kubenswrapper[4689]: I1201 08:45:39.744965 4689 scope.go:117] "RemoveContainer" containerID="cdcf58565a332d76dac9cc12df1e0cd89eee6934be4b3234f8c36e9a8e22983a" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.269870 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" 
podUID="7d86e20d-febe-4cfb-a738-4705f8122326" containerName="registry" containerID="cri-o://e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1" gracePeriod=30 Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.736817 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.772924 4689 generic.go:334] "Generic (PLEG): container finished" podID="7d86e20d-febe-4cfb-a738-4705f8122326" containerID="e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1" exitCode=0 Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.772986 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" event={"ID":"7d86e20d-febe-4cfb-a738-4705f8122326","Type":"ContainerDied","Data":"e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1"} Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.772992 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.773037 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t8fdl" event={"ID":"7d86e20d-febe-4cfb-a738-4705f8122326","Type":"ContainerDied","Data":"251bf0202d38ad8c2b8d502884a7afb8940b19cb08935a73016cf046f9a44416"} Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.773057 4689 scope.go:117] "RemoveContainer" containerID="e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.790836 4689 scope.go:117] "RemoveContainer" containerID="e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1" Dec 01 08:45:43 crc kubenswrapper[4689]: E1201 08:45:43.791845 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1\": container with ID starting with e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1 not found: ID does not exist" containerID="e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.791903 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1"} err="failed to get container status \"e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1\": rpc error: code = NotFound desc = could not find container \"e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1\": container with ID starting with e088513029875ffa84f4b164ada18653cdce555977e688e6984b0d101248c3a1 not found: ID does not exist" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.918978 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-bound-sa-token\") pod \"7d86e20d-febe-4cfb-a738-4705f8122326\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.919478 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d86e20d-febe-4cfb-a738-4705f8122326-ca-trust-extracted\") pod 
\"7d86e20d-febe-4cfb-a738-4705f8122326\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.919540 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-trusted-ca\") pod \"7d86e20d-febe-4cfb-a738-4705f8122326\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.919649 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d86e20d-febe-4cfb-a738-4705f8122326-installation-pull-secrets\") pod \"7d86e20d-febe-4cfb-a738-4705f8122326\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.920025 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7d86e20d-febe-4cfb-a738-4705f8122326\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.920116 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-registry-certificates\") pod \"7d86e20d-febe-4cfb-a738-4705f8122326\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.920154 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-registry-tls\") pod \"7d86e20d-febe-4cfb-a738-4705f8122326\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.920206 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qq96\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-kube-api-access-6qq96\") pod \"7d86e20d-febe-4cfb-a738-4705f8122326\" (UID: \"7d86e20d-febe-4cfb-a738-4705f8122326\") " Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.920485 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7d86e20d-febe-4cfb-a738-4705f8122326" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.920785 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.923951 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7d86e20d-febe-4cfb-a738-4705f8122326" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.926420 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7d86e20d-febe-4cfb-a738-4705f8122326" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.927064 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-kube-api-access-6qq96" (OuterVolumeSpecName: "kube-api-access-6qq96") pod "7d86e20d-febe-4cfb-a738-4705f8122326" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326"). InnerVolumeSpecName "kube-api-access-6qq96". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.933719 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7d86e20d-febe-4cfb-a738-4705f8122326" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.936901 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7d86e20d-febe-4cfb-a738-4705f8122326" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.939594 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d86e20d-febe-4cfb-a738-4705f8122326-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7d86e20d-febe-4cfb-a738-4705f8122326" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:45:43 crc kubenswrapper[4689]: I1201 08:45:43.942614 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d86e20d-febe-4cfb-a738-4705f8122326-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7d86e20d-febe-4cfb-a738-4705f8122326" (UID: "7d86e20d-febe-4cfb-a738-4705f8122326"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:45:44 crc kubenswrapper[4689]: I1201 08:45:44.022752 4689 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7d86e20d-febe-4cfb-a738-4705f8122326-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 01 08:45:44 crc kubenswrapper[4689]: I1201 08:45:44.022815 4689 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7d86e20d-febe-4cfb-a738-4705f8122326-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 01 08:45:44 crc kubenswrapper[4689]: I1201 08:45:44.022829 4689 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 01 08:45:44 crc kubenswrapper[4689]: I1201 08:45:44.022843 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qq96\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-kube-api-access-6qq96\") on node \"crc\" DevicePath \"\"" Dec 01 08:45:44 crc kubenswrapper[4689]: I1201 08:45:44.022879 4689 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d86e20d-febe-4cfb-a738-4705f8122326-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 08:45:44 crc kubenswrapper[4689]: I1201 08:45:44.022891 4689 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7d86e20d-febe-4cfb-a738-4705f8122326-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 01 08:45:44 crc kubenswrapper[4689]: I1201 08:45:44.115344 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8fdl"] Dec 01 08:45:44 crc kubenswrapper[4689]: I1201 08:45:44.119293 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t8fdl"] Dec 01 08:45:45 crc kubenswrapper[4689]: I1201 08:45:45.057334 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d86e20d-febe-4cfb-a738-4705f8122326" path="/var/lib/kubelet/pods/7d86e20d-febe-4cfb-a738-4705f8122326/volumes" Dec 01 08:47:39 crc kubenswrapper[4689]: I1201 08:47:39.147582 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:47:39 crc kubenswrapper[4689]: I1201 08:47:39.148648 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:48:09 crc kubenswrapper[4689]: I1201 08:48:09.148669 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:48:09 crc kubenswrapper[4689]: I1201 08:48:09.149305 4689 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:48:39 crc kubenswrapper[4689]: I1201 08:48:39.147239 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:48:39 crc kubenswrapper[4689]: I1201 08:48:39.148070 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:48:39 crc kubenswrapper[4689]: I1201 08:48:39.148182 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:48:39 crc kubenswrapper[4689]: I1201 08:48:39.149323 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77ce6d5e8c89d838f6758e3e368fce5280554d8513e298f9f66f88dccdb20c3d"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 08:48:39 crc kubenswrapper[4689]: I1201 08:48:39.149559 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://77ce6d5e8c89d838f6758e3e368fce5280554d8513e298f9f66f88dccdb20c3d" gracePeriod=600 Dec 01 08:48:40 crc kubenswrapper[4689]: I1201 08:48:40.057732 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="77ce6d5e8c89d838f6758e3e368fce5280554d8513e298f9f66f88dccdb20c3d" exitCode=0 Dec 01 08:48:40 crc kubenswrapper[4689]: I1201 08:48:40.057789 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"77ce6d5e8c89d838f6758e3e368fce5280554d8513e298f9f66f88dccdb20c3d"} Dec 01 08:48:40 crc kubenswrapper[4689]: I1201 08:48:40.058147 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"74b1ead9c91ab196fa5f6493d6eb41ab2d35580a1ad359148d766458297d4a15"} Dec 01 08:48:40 crc kubenswrapper[4689]: I1201 08:48:40.058193 4689 scope.go:117] "RemoveContainer" containerID="37b1b11c7bc8ffe4ab73103e6e1b196742e6409d79e78201bda5211f96e4082a" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.807081 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-lhzz2"] Dec 01 08:49:08 crc kubenswrapper[4689]: E1201 08:49:08.808080 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d86e20d-febe-4cfb-a738-4705f8122326" containerName="registry" Dec 01 08:49:08 
crc kubenswrapper[4689]: I1201 08:49:08.808109 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d86e20d-febe-4cfb-a738-4705f8122326" containerName="registry" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.808279 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d86e20d-febe-4cfb-a738-4705f8122326" containerName="registry" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.808926 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.812562 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.813315 4689 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-nqsnx" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.821147 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.827567 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-lhzz2"] Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.836634 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jxq2j"] Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.837694 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-jxq2j" Dec 01 08:49:08 crc kubenswrapper[4689]: W1201 08:49:08.853034 4689 reflector.go:561] object-"cert-manager"/"cert-manager-dockercfg-spms6": failed to list *v1.Secret: secrets "cert-manager-dockercfg-spms6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object Dec 01 08:49:08 crc kubenswrapper[4689]: E1201 08:49:08.853104 4689 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-dockercfg-spms6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-dockercfg-spms6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.865264 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jxq2j"] Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.880844 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mnqrt"] Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.881787 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.887843 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mnqrt"] Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.891418 4689 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-pvnz5" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.990111 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l6k5\" (UniqueName: \"kubernetes.io/projected/159eaec1-709b-4f6b-9c2d-271433805055-kube-api-access-7l6k5\") pod \"cert-manager-5b446d88c5-jxq2j\" (UID: \"159eaec1-709b-4f6b-9c2d-271433805055\") " pod="cert-manager/cert-manager-5b446d88c5-jxq2j" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.990195 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7s8t\" (UniqueName: \"kubernetes.io/projected/f166eac0-2073-4aa8-9b0b-6b3c6e43b19e-kube-api-access-j7s8t\") pod \"cert-manager-cainjector-7f985d654d-lhzz2\" (UID: \"f166eac0-2073-4aa8-9b0b-6b3c6e43b19e\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" Dec 01 08:49:08 crc kubenswrapper[4689]: I1201 08:49:08.990219 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc5dd\" (UniqueName: \"kubernetes.io/projected/0690c213-4822-49c3-a886-9dd92aa3f957-kube-api-access-cc5dd\") pod \"cert-manager-webhook-5655c58dd6-mnqrt\" (UID: \"0690c213-4822-49c3-a886-9dd92aa3f957\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.091716 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l6k5\" (UniqueName: \"kubernetes.io/projected/159eaec1-709b-4f6b-9c2d-271433805055-kube-api-access-7l6k5\") pod \"cert-manager-5b446d88c5-jxq2j\" (UID: \"159eaec1-709b-4f6b-9c2d-271433805055\") " pod="cert-manager/cert-manager-5b446d88c5-jxq2j" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.091789 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7s8t\" (UniqueName: \"kubernetes.io/projected/f166eac0-2073-4aa8-9b0b-6b3c6e43b19e-kube-api-access-j7s8t\") pod \"cert-manager-cainjector-7f985d654d-lhzz2\" (UID: \"f166eac0-2073-4aa8-9b0b-6b3c6e43b19e\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.091816 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc5dd\" (UniqueName: \"kubernetes.io/projected/0690c213-4822-49c3-a886-9dd92aa3f957-kube-api-access-cc5dd\") pod \"cert-manager-webhook-5655c58dd6-mnqrt\" (UID: \"0690c213-4822-49c3-a886-9dd92aa3f957\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.114900 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l6k5\" (UniqueName: \"kubernetes.io/projected/159eaec1-709b-4f6b-9c2d-271433805055-kube-api-access-7l6k5\") pod \"cert-manager-5b446d88c5-jxq2j\" (UID: \"159eaec1-709b-4f6b-9c2d-271433805055\") " pod="cert-manager/cert-manager-5b446d88c5-jxq2j" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.115841 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-cc5dd\" (UniqueName: \"kubernetes.io/projected/0690c213-4822-49c3-a886-9dd92aa3f957-kube-api-access-cc5dd\") pod \"cert-manager-webhook-5655c58dd6-mnqrt\" (UID: \"0690c213-4822-49c3-a886-9dd92aa3f957\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.116219 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7s8t\" (UniqueName: \"kubernetes.io/projected/f166eac0-2073-4aa8-9b0b-6b3c6e43b19e-kube-api-access-j7s8t\") pod \"cert-manager-cainjector-7f985d654d-lhzz2\" (UID: \"f166eac0-2073-4aa8-9b0b-6b3c6e43b19e\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.125918 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.202202 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.406711 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.415924 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-lhzz2"] Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.448605 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mnqrt"] Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.842847 4689 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-spms6" Dec 01 08:49:09 crc kubenswrapper[4689]: I1201 08:49:09.845574 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-jxq2j" Dec 01 08:49:10 crc kubenswrapper[4689]: I1201 08:49:10.050061 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jxq2j"] Dec 01 08:49:10 crc kubenswrapper[4689]: W1201 08:49:10.057007 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod159eaec1_709b_4f6b_9c2d_271433805055.slice/crio-bb3e54aae4ad140ac38d08af761f7b205fa97807f7455c569c341a966b87e3f9 WatchSource:0}: Error finding container bb3e54aae4ad140ac38d08af761f7b205fa97807f7455c569c341a966b87e3f9: Status 404 returned error can't find the container with id bb3e54aae4ad140ac38d08af761f7b205fa97807f7455c569c341a966b87e3f9 Dec 01 08:49:10 crc kubenswrapper[4689]: I1201 08:49:10.274142 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" event={"ID":"f166eac0-2073-4aa8-9b0b-6b3c6e43b19e","Type":"ContainerStarted","Data":"81845a1abdd8f44476087127df3924271094d80ce8c2ae5a893409b71ec7f435"} Dec 01 08:49:10 crc kubenswrapper[4689]: I1201 08:49:10.275504 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jxq2j" event={"ID":"159eaec1-709b-4f6b-9c2d-271433805055","Type":"ContainerStarted","Data":"bb3e54aae4ad140ac38d08af761f7b205fa97807f7455c569c341a966b87e3f9"} Dec 01 08:49:10 crc kubenswrapper[4689]: I1201 08:49:10.276606 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" event={"ID":"0690c213-4822-49c3-a886-9dd92aa3f957","Type":"ContainerStarted","Data":"2ef84191378d058fbd9d08d7ff7fb1a4ff7be9256f65f9c820b7e6f775a94c46"} Dec 01 08:49:13 crc kubenswrapper[4689]: I1201 08:49:13.297537 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" event={"ID":"0690c213-4822-49c3-a886-9dd92aa3f957","Type":"ContainerStarted","Data":"8db1bb12327a098c514441f51fcb1545f06614b1a5af11515f3b30b184fb3bab"} Dec 01 08:49:13 crc kubenswrapper[4689]: I1201 08:49:13.298324 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" Dec 01 08:49:13 crc kubenswrapper[4689]: I1201 08:49:13.307232 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" event={"ID":"f166eac0-2073-4aa8-9b0b-6b3c6e43b19e","Type":"ContainerStarted","Data":"30ed4fe6b91c26e9d585487c1a1e70b72333466136471c6b637c8d1cf47bedad"} Dec 01 08:49:13 crc kubenswrapper[4689]: I1201 08:49:13.312248 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" podStartSLOduration=2.429396689 podStartE2EDuration="5.312206674s" podCreationTimestamp="2025-12-01 08:49:08 +0000 UTC" firstStartedPulling="2025-12-01 08:49:09.454633562 +0000 UTC m=+629.526921466" lastFinishedPulling="2025-12-01 08:49:12.337443547 +0000 UTC m=+632.409731451" observedRunningTime="2025-12-01 08:49:13.311148816 +0000 UTC m=+633.383436720" watchObservedRunningTime="2025-12-01 08:49:13.312206674 +0000 UTC m=+633.384494578" Dec 01 08:49:13 crc kubenswrapper[4689]: I1201 08:49:13.332225 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" podStartSLOduration=2.464573144 podStartE2EDuration="5.332211033s" podCreationTimestamp="2025-12-01 08:49:08 +0000 UTC" 
firstStartedPulling="2025-12-01 08:49:09.40571617 +0000 UTC m=+629.478004074" lastFinishedPulling="2025-12-01 08:49:12.273354049 +0000 UTC m=+632.345641963" observedRunningTime="2025-12-01 08:49:13.328385128 +0000 UTC m=+633.400673032" watchObservedRunningTime="2025-12-01 08:49:13.332211033 +0000 UTC m=+633.404498937" Dec 01 08:49:14 crc kubenswrapper[4689]: I1201 08:49:14.313067 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jxq2j" event={"ID":"159eaec1-709b-4f6b-9c2d-271433805055","Type":"ContainerStarted","Data":"2f435985bb1cd85efc5116ba3d002c45625d783b669fe67371895bb7f118a125"} Dec 01 08:49:14 crc kubenswrapper[4689]: I1201 08:49:14.329569 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-jxq2j" podStartSLOduration=2.739816504 podStartE2EDuration="6.32954391s" podCreationTimestamp="2025-12-01 08:49:08 +0000 UTC" firstStartedPulling="2025-12-01 08:49:10.05973102 +0000 UTC m=+630.132018924" lastFinishedPulling="2025-12-01 08:49:13.649458426 +0000 UTC m=+633.721746330" observedRunningTime="2025-12-01 08:49:14.326815005 +0000 UTC m=+634.399102909" watchObservedRunningTime="2025-12-01 08:49:14.32954391 +0000 UTC m=+634.401831814" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.212998 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-mnqrt" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.225027 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8zn56"] Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.225437 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovn-controller" containerID="cri-o://211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829" gracePeriod=30 Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.225497 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="northd" containerID="cri-o://0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2" gracePeriod=30 Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.225542 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="sbdb" containerID="cri-o://e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5" gracePeriod=30 Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.225527 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovn-acl-logging" containerID="cri-o://b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1" gracePeriod=30 Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.225605 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="nbdb" containerID="cri-o://2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746" gracePeriod=30 Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.225531 4689 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7" gracePeriod=30 Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.225689 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kube-rbac-proxy-node" containerID="cri-o://8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c" gracePeriod=30 Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.292731 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" containerID="cri-o://4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5" gracePeriod=30 Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.355120 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/2.log" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.356934 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/1.log" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.356981 4689 generic.go:334] "Generic (PLEG): container finished" podID="6bebcb50-c292-4bca-9299-2fdc21439b18" containerID="e15e4d4d20bedfa63e8dc39de991d3e641a4c410f89da82a2a3386442c160632" exitCode=2 Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.357014 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dl2st" event={"ID":"6bebcb50-c292-4bca-9299-2fdc21439b18","Type":"ContainerDied","Data":"e15e4d4d20bedfa63e8dc39de991d3e641a4c410f89da82a2a3386442c160632"} Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.357302 4689 scope.go:117] "RemoveContainer" containerID="f0a76050989ec3f5388f58967fc29b953bea67fbbf75db6ad980546718f4f034" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.358129 4689 scope.go:117] "RemoveContainer" containerID="e15e4d4d20bedfa63e8dc39de991d3e641a4c410f89da82a2a3386442c160632" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.360031 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dl2st_openshift-multus(6bebcb50-c292-4bca-9299-2fdc21439b18)\"" pod="openshift-multus/multus-dl2st" podUID="6bebcb50-c292-4bca-9299-2fdc21439b18" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.623918 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/3.log" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.629951 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovn-acl-logging/0.log" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.630707 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod988f960f_52fa_406f_9320_a8eec7a04f76.slice/crio-0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod988f960f_52fa_406f_9320_a8eec7a04f76.slice/crio-conmon-0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod988f960f_52fa_406f_9320_a8eec7a04f76.slice/crio-2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746.scope\": RecentStats: unable to find data in memory cache]" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.631195 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovn-controller/0.log" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.631733 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679209 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mqp2s"] Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679632 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="sbdb" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679646 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="sbdb" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679657 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679662 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679671 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679677 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679687 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679693 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679702 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679708 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679720 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovn-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 
08:49:19.679726 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovn-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679733 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kubecfg-setup" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679740 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kubecfg-setup" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679745 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="northd" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679751 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="northd" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679758 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="nbdb" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679766 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="nbdb" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679774 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovn-acl-logging" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679781 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovn-acl-logging" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679789 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kube-rbac-proxy-node" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679796 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kube-rbac-proxy-node" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.679804 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679809 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679896 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679905 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="nbdb" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679911 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovn-acl-logging" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679922 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="northd" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679929 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc 
kubenswrapper[4689]: I1201 08:49:19.679937 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovn-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679945 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679952 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kube-rbac-proxy-node" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679959 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="sbdb" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.679968 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 08:49:19 crc kubenswrapper[4689]: E1201 08:49:19.680055 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.680061 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.680175 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.680342 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" containerName="ovnkube-controller" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.681831 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.796983 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-bin\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797053 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-script-lib\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797102 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-ovn-kubernetes\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797123 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-var-lib-openvswitch\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797150 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-kubelet\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797157 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797215 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797248 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797266 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-openvswitch\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797317 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797345 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797414 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797427 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-systemd-units\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797466 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-netns\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797493 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-etc-openvswitch\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797510 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-node-log\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797548 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-config\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797573 
4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-slash\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797603 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797621 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-node-log" (OuterVolumeSpecName: "node-log") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797626 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-ovn\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797650 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797673 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/988f960f-52fa-406f-9320-a8eec7a04f76-ovn-node-metrics-cert\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797659 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797692 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797710 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-systemd\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797685 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-slash" (OuterVolumeSpecName: "host-slash") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797739 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcm2f\" (UniqueName: \"kubernetes.io/projected/988f960f-52fa-406f-9320-a8eec7a04f76-kube-api-access-fcm2f\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797788 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-netd\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797808 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-log-socket\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.797848 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-env-overrides\") pod \"988f960f-52fa-406f-9320-a8eec7a04f76\" (UID: \"988f960f-52fa-406f-9320-a8eec7a04f76\") " Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798036 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-systemd-units\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798087 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-cni-bin\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798116 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-cni-netd\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798140 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-node-log\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798161 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-etc-openvswitch\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798182 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mwqm\" (UniqueName: \"kubernetes.io/projected/bb884aea-14ac-453f-b847-f161c8e48bd3-kube-api-access-6mwqm\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798223 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-var-lib-openvswitch\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798237 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798233 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798267 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-log-socket" (OuterVolumeSpecName: "log-socket") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798251 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-slash\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798417 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb884aea-14ac-453f-b847-f161c8e48bd3-env-overrides\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798433 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798450 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-run-ovn-kubernetes\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798531 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb884aea-14ac-453f-b847-f161c8e48bd3-ovnkube-script-lib\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798558 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798565 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb884aea-14ac-453f-b847-f161c8e48bd3-ovnkube-config\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798604 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798655 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-run-ovn\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798683 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-run-systemd\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798704 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-run-openvswitch\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798726 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-kubelet\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798752 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798777 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb884aea-14ac-453f-b847-f161c8e48bd3-ovn-node-metrics-cert\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798816 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-log-socket\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798875 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-run-netns\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798943 4689 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-env-overrides\") on node 
\"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798960 4689 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798971 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798983 4689 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.798997 4689 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799008 4689 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799018 4689 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799028 4689 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799039 4689 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799050 4689 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799061 4689 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-node-log\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799072 4689 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/988f960f-52fa-406f-9320-a8eec7a04f76-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799082 4689 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-slash\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799095 4689 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" 
DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799108 4689 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799120 4689 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.799131 4689 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-log-socket\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.803277 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988f960f-52fa-406f-9320-a8eec7a04f76-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.804624 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988f960f-52fa-406f-9320-a8eec7a04f76-kube-api-access-fcm2f" (OuterVolumeSpecName: "kube-api-access-fcm2f") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "kube-api-access-fcm2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.810587 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "988f960f-52fa-406f-9320-a8eec7a04f76" (UID: "988f960f-52fa-406f-9320-a8eec7a04f76"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900169 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-log-socket\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900302 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-run-netns\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900322 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-log-socket\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900451 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-systemd-units\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900434 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-run-netns\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900537 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-systemd-units\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900580 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-cni-bin\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900636 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-cni-bin\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900637 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-cni-netd\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900696 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-cni-netd\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900722 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-node-log\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900807 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-node-log\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900892 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-etc-openvswitch\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900951 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-etc-openvswitch\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.900955 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mwqm\" (UniqueName: \"kubernetes.io/projected/bb884aea-14ac-453f-b847-f161c8e48bd3-kube-api-access-6mwqm\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901063 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-var-lib-openvswitch\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901125 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-slash\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901153 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-var-lib-openvswitch\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901179 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/bb884aea-14ac-453f-b847-f161c8e48bd3-env-overrides\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901194 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-slash\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901228 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-run-ovn-kubernetes\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901303 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb884aea-14ac-453f-b847-f161c8e48bd3-ovnkube-script-lib\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901359 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb884aea-14ac-453f-b847-f161c8e48bd3-ovnkube-config\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901493 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-run-ovn\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901548 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-run-systemd\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901596 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-run-openvswitch\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901667 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-kubelet\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901673 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-run-ovn\") pod \"ovnkube-node-mqp2s\" (UID: 
\"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901691 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-run-ovn-kubernetes\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.901765 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902027 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-run-openvswitch\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902054 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902096 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bb884aea-14ac-453f-b847-f161c8e48bd3-env-overrides\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902130 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb884aea-14ac-453f-b847-f161c8e48bd3-ovn-node-metrics-cert\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902229 4689 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/988f960f-52fa-406f-9320-a8eec7a04f76-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902237 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bb884aea-14ac-453f-b847-f161c8e48bd3-ovnkube-script-lib\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902259 4689 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/988f960f-52fa-406f-9320-a8eec7a04f76-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902278 4689 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fcm2f\" (UniqueName: \"kubernetes.io/projected/988f960f-52fa-406f-9320-a8eec7a04f76-kube-api-access-fcm2f\") on node \"crc\" DevicePath \"\"" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902287 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-run-systemd\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.902139 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb884aea-14ac-453f-b847-f161c8e48bd3-host-kubelet\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.903403 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bb884aea-14ac-453f-b847-f161c8e48bd3-ovnkube-config\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.908182 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bb884aea-14ac-453f-b847-f161c8e48bd3-ovn-node-metrics-cert\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.923935 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mwqm\" (UniqueName: \"kubernetes.io/projected/bb884aea-14ac-453f-b847-f161c8e48bd3-kube-api-access-6mwqm\") pod \"ovnkube-node-mqp2s\" (UID: \"bb884aea-14ac-453f-b847-f161c8e48bd3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:19 crc kubenswrapper[4689]: I1201 08:49:19.995049 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:20 crc kubenswrapper[4689]: W1201 08:49:20.030118 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb884aea_14ac_453f_b847_f161c8e48bd3.slice/crio-75f5bf6a786f6f40af583ff07548e9ea1953c28b55cb0b624fa32de91200c72f WatchSource:0}: Error finding container 75f5bf6a786f6f40af583ff07548e9ea1953c28b55cb0b624fa32de91200c72f: Status 404 returned error can't find the container with id 75f5bf6a786f6f40af583ff07548e9ea1953c28b55cb0b624fa32de91200c72f Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.368651 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovnkube-controller/3.log" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.373865 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovn-acl-logging/0.log" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.374698 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8zn56_988f960f-52fa-406f-9320-a8eec7a04f76/ovn-controller/0.log" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375288 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5" exitCode=0 Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375328 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5" exitCode=0 Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375343 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746" exitCode=0 Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375356 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2" exitCode=0 Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375416 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7" exitCode=0 Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375430 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c" exitCode=0 Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375443 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1" exitCode=143 Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375456 4689 generic.go:334] "Generic (PLEG): container finished" podID="988f960f-52fa-406f-9320-a8eec7a04f76" containerID="211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829" exitCode=143 Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375444 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" 
event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375533 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375563 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375621 4689 scope.go:117] "RemoveContainer" containerID="4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375571 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375757 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375786 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375808 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375856 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375885 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375897 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375908 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375919 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375930 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375941 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375951 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375962 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375976 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.375992 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376007 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376018 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376028 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376039 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376049 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376060 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376070 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376081 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376091 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376105 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376123 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376137 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376148 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376160 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376171 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376185 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376196 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376206 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376217 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376228 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376242 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zn56" event={"ID":"988f960f-52fa-406f-9320-a8eec7a04f76","Type":"ContainerDied","Data":"273efa17ff5b2d285cdae463bed6e3a5cc8fbb768846cf6beff009d97192773b"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376257 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376271 4689 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376282 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376292 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376302 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376325 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376335 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376354 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376421 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.376433 4689 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.378594 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/2.log" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.383111 4689 generic.go:334] "Generic (PLEG): container finished" podID="bb884aea-14ac-453f-b847-f161c8e48bd3" containerID="61af37c53f8c1fac89e03b8fd8451cab2eadd61ad709cfd186dae50a3255245f" exitCode=0 Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.383151 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerDied","Data":"61af37c53f8c1fac89e03b8fd8451cab2eadd61ad709cfd186dae50a3255245f"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.383182 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerStarted","Data":"75f5bf6a786f6f40af583ff07548e9ea1953c28b55cb0b624fa32de91200c72f"} Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.448971 4689 scope.go:117] "RemoveContainer" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.495796 4689 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8zn56"] Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.503820 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8zn56"] Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.533051 4689 scope.go:117] "RemoveContainer" containerID="e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.550489 4689 scope.go:117] "RemoveContainer" containerID="2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.569302 4689 scope.go:117] "RemoveContainer" containerID="0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.597648 4689 scope.go:117] "RemoveContainer" containerID="46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.619979 4689 scope.go:117] "RemoveContainer" containerID="8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.633431 4689 scope.go:117] "RemoveContainer" containerID="b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.651985 4689 scope.go:117] "RemoveContainer" containerID="211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.669161 4689 scope.go:117] "RemoveContainer" containerID="496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.684056 4689 scope.go:117] "RemoveContainer" containerID="4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5" Dec 01 08:49:20 crc kubenswrapper[4689]: E1201 08:49:20.684839 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": container with ID starting with 4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5 not found: ID does not exist" containerID="4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.684883 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} err="failed to get container status \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": rpc error: code = NotFound desc = could not find container \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": container with ID starting with 4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.684915 4689 scope.go:117] "RemoveContainer" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" Dec 01 08:49:20 crc kubenswrapper[4689]: E1201 08:49:20.685352 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\": container with ID starting with f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1 not found: ID does not exist" 
containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.685441 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"} err="failed to get container status \"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\": rpc error: code = NotFound desc = could not find container \"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\": container with ID starting with f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.685462 4689 scope.go:117] "RemoveContainer" containerID="e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5" Dec 01 08:49:20 crc kubenswrapper[4689]: E1201 08:49:20.685739 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\": container with ID starting with e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5 not found: ID does not exist" containerID="e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.685766 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"} err="failed to get container status \"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\": rpc error: code = NotFound desc = could not find container \"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\": container with ID starting with e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.685784 4689 scope.go:117] "RemoveContainer" containerID="2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746" Dec 01 08:49:20 crc kubenswrapper[4689]: E1201 08:49:20.686062 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\": container with ID starting with 2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746 not found: ID does not exist" containerID="2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.686092 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} err="failed to get container status \"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\": rpc error: code = NotFound desc = could not find container \"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\": container with ID starting with 2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.686109 4689 scope.go:117] "RemoveContainer" containerID="0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2" Dec 01 08:49:20 crc kubenswrapper[4689]: E1201 08:49:20.686498 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\": container with ID starting with 0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2 not found: ID does not exist" containerID="0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.686525 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} err="failed to get container status \"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\": rpc error: code = NotFound desc = could not find container \"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\": container with ID starting with 0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.686545 4689 scope.go:117] "RemoveContainer" containerID="46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7" Dec 01 08:49:20 crc kubenswrapper[4689]: E1201 08:49:20.687071 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\": container with ID starting with 46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7 not found: ID does not exist" containerID="46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.687100 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} err="failed to get container status \"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\": rpc error: code = NotFound desc = could not find container \"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\": container with ID starting with 46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.687126 4689 scope.go:117] "RemoveContainer" containerID="8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c" Dec 01 08:49:20 crc kubenswrapper[4689]: E1201 08:49:20.687513 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\": container with ID starting with 8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c not found: ID does not exist" containerID="8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.687546 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} err="failed to get container status \"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\": rpc error: code = NotFound desc = could not find container \"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\": container with ID starting with 8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.687563 4689 scope.go:117] "RemoveContainer" containerID="b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1" Dec 01 08:49:20 crc 
kubenswrapper[4689]: E1201 08:49:20.687930 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\": container with ID starting with b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1 not found: ID does not exist" containerID="b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.687957 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} err="failed to get container status \"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\": rpc error: code = NotFound desc = could not find container \"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\": container with ID starting with b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.687974 4689 scope.go:117] "RemoveContainer" containerID="211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829" Dec 01 08:49:20 crc kubenswrapper[4689]: E1201 08:49:20.688305 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\": container with ID starting with 211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829 not found: ID does not exist" containerID="211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.688331 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} err="failed to get container status \"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\": rpc error: code = NotFound desc = could not find container \"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\": container with ID starting with 211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.688348 4689 scope.go:117] "RemoveContainer" containerID="496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18" Dec 01 08:49:20 crc kubenswrapper[4689]: E1201 08:49:20.688663 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\": container with ID starting with 496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18 not found: ID does not exist" containerID="496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.688688 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18"} err="failed to get container status \"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\": rpc error: code = NotFound desc = could not find container \"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\": container with ID starting with 496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: 
I1201 08:49:20.688707 4689 scope.go:117] "RemoveContainer" containerID="4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.689097 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} err="failed to get container status \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": rpc error: code = NotFound desc = could not find container \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": container with ID starting with 4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.689121 4689 scope.go:117] "RemoveContainer" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.689397 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"} err="failed to get container status \"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\": rpc error: code = NotFound desc = could not find container \"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\": container with ID starting with f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.689444 4689 scope.go:117] "RemoveContainer" containerID="e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.689683 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"} err="failed to get container status \"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\": rpc error: code = NotFound desc = could not find container \"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\": container with ID starting with e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.689711 4689 scope.go:117] "RemoveContainer" containerID="2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.690146 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} err="failed to get container status \"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\": rpc error: code = NotFound desc = could not find container \"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\": container with ID starting with 2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.690172 4689 scope.go:117] "RemoveContainer" containerID="0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.690947 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} err="failed to get container status 
\"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\": rpc error: code = NotFound desc = could not find container \"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\": container with ID starting with 0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.690973 4689 scope.go:117] "RemoveContainer" containerID="46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.692254 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} err="failed to get container status \"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\": rpc error: code = NotFound desc = could not find container \"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\": container with ID starting with 46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.692276 4689 scope.go:117] "RemoveContainer" containerID="8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.692686 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} err="failed to get container status \"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\": rpc error: code = NotFound desc = could not find container \"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\": container with ID starting with 8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.692709 4689 scope.go:117] "RemoveContainer" containerID="b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.692962 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} err="failed to get container status \"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\": rpc error: code = NotFound desc = could not find container \"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\": container with ID starting with b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.693031 4689 scope.go:117] "RemoveContainer" containerID="211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.693301 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} err="failed to get container status \"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\": rpc error: code = NotFound desc = could not find container \"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\": container with ID starting with 211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.693331 4689 scope.go:117] "RemoveContainer" 
containerID="496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.693803 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18"} err="failed to get container status \"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\": rpc error: code = NotFound desc = could not find container \"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\": container with ID starting with 496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.693829 4689 scope.go:117] "RemoveContainer" containerID="4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.694156 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} err="failed to get container status \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": rpc error: code = NotFound desc = could not find container \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": container with ID starting with 4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.694180 4689 scope.go:117] "RemoveContainer" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.694442 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"} err="failed to get container status \"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\": rpc error: code = NotFound desc = could not find container \"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\": container with ID starting with f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.694466 4689 scope.go:117] "RemoveContainer" containerID="e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.694733 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"} err="failed to get container status \"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\": rpc error: code = NotFound desc = could not find container \"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\": container with ID starting with e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.694756 4689 scope.go:117] "RemoveContainer" containerID="2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.695026 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} err="failed to get container status \"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\": rpc error: code = NotFound desc = could not find 
container \"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\": container with ID starting with 2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.695049 4689 scope.go:117] "RemoveContainer" containerID="0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.695577 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} err="failed to get container status \"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\": rpc error: code = NotFound desc = could not find container \"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\": container with ID starting with 0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.695597 4689 scope.go:117] "RemoveContainer" containerID="46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.695854 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} err="failed to get container status \"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\": rpc error: code = NotFound desc = could not find container \"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\": container with ID starting with 46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.695873 4689 scope.go:117] "RemoveContainer" containerID="8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.696177 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} err="failed to get container status \"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\": rpc error: code = NotFound desc = could not find container \"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\": container with ID starting with 8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.696199 4689 scope.go:117] "RemoveContainer" containerID="b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.696547 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} err="failed to get container status \"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\": rpc error: code = NotFound desc = could not find container \"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\": container with ID starting with b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.696574 4689 scope.go:117] "RemoveContainer" containerID="211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.697042 4689 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} err="failed to get container status \"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\": rpc error: code = NotFound desc = could not find container \"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\": container with ID starting with 211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.697070 4689 scope.go:117] "RemoveContainer" containerID="496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.697646 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18"} err="failed to get container status \"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\": rpc error: code = NotFound desc = could not find container \"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\": container with ID starting with 496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.697671 4689 scope.go:117] "RemoveContainer" containerID="4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.697978 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} err="failed to get container status \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": rpc error: code = NotFound desc = could not find container \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": container with ID starting with 4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.698003 4689 scope.go:117] "RemoveContainer" containerID="f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.698241 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1"} err="failed to get container status \"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\": rpc error: code = NotFound desc = could not find container \"f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1\": container with ID starting with f3c9833e8018d8e0b8fdf680a449297d9419da3fb486c78f21c586947f97e1a1 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.698258 4689 scope.go:117] "RemoveContainer" containerID="e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.698490 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5"} err="failed to get container status \"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\": rpc error: code = NotFound desc = could not find container \"e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5\": container with ID starting with 
e96fecee94f6f25ea0bbde3c6a23854fb5dc476f4b5cfb82871358fbcf6809c5 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.698516 4689 scope.go:117] "RemoveContainer" containerID="2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.698806 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746"} err="failed to get container status \"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\": rpc error: code = NotFound desc = could not find container \"2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746\": container with ID starting with 2afbacac849e9ed880e975ae167ea5b2329811c28516152f4d809b043ab61746 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.698832 4689 scope.go:117] "RemoveContainer" containerID="0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.699306 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2"} err="failed to get container status \"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\": rpc error: code = NotFound desc = could not find container \"0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2\": container with ID starting with 0be8fa8da5eef9035e80121c75adb9d6653d8683948fa83cfb1ff87a5fb3e8b2 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.699332 4689 scope.go:117] "RemoveContainer" containerID="46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.699636 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7"} err="failed to get container status \"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\": rpc error: code = NotFound desc = could not find container \"46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7\": container with ID starting with 46c9dc400db69538c36365f4e2d66aae47defc4224c2881532870f875e2b30e7 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.699662 4689 scope.go:117] "RemoveContainer" containerID="8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.699980 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c"} err="failed to get container status \"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\": rpc error: code = NotFound desc = could not find container \"8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c\": container with ID starting with 8be3fd2d2a091ced00ce091ea6f81df99a6718fa9ead81d56590d2c8ebc2a36c not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.700006 4689 scope.go:117] "RemoveContainer" containerID="b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.700235 4689 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1"} err="failed to get container status \"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\": rpc error: code = NotFound desc = could not find container \"b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1\": container with ID starting with b81345c33ddfbccd0c6b7d9c505ee81d513ce884fa1326add04026fd82d441e1 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.700259 4689 scope.go:117] "RemoveContainer" containerID="211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.700481 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829"} err="failed to get container status \"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\": rpc error: code = NotFound desc = could not find container \"211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829\": container with ID starting with 211c2ff150fa2d5fb73b0df4cfc5e694bf0b6b6761eec1e6827ac983fd759829 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.700504 4689 scope.go:117] "RemoveContainer" containerID="496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.700966 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18"} err="failed to get container status \"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\": rpc error: code = NotFound desc = could not find container \"496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18\": container with ID starting with 496b7a56d1e3bb23f4d1da0fc5bbe009995c46b1f44faad80ee18a18ab301a18 not found: ID does not exist" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.700988 4689 scope.go:117] "RemoveContainer" containerID="4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5" Dec 01 08:49:20 crc kubenswrapper[4689]: I1201 08:49:20.701325 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5"} err="failed to get container status \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": rpc error: code = NotFound desc = could not find container \"4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5\": container with ID starting with 4c85239b54ff834a926bf6e40f8f4cf64a98a2d9b8ca6eb7b30f3cb1332d35b5 not found: ID does not exist" Dec 01 08:49:21 crc kubenswrapper[4689]: I1201 08:49:21.054381 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988f960f-52fa-406f-9320-a8eec7a04f76" path="/var/lib/kubelet/pods/988f960f-52fa-406f-9320-a8eec7a04f76/volumes" Dec 01 08:49:21 crc kubenswrapper[4689]: I1201 08:49:21.421506 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerStarted","Data":"62cf387d7a020a4af7067bcdf6ff12c6b2cd2e00335338726fe552d298a21847"} Dec 01 08:49:21 crc kubenswrapper[4689]: I1201 08:49:21.421554 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" 
event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerStarted","Data":"57f1666abb9099d4c15d8af580f86ac79188081adeab5f0ced0bef09ea9907b2"} Dec 01 08:49:21 crc kubenswrapper[4689]: I1201 08:49:21.421566 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerStarted","Data":"4b587e72def62cf12261963a49fd40aa5ae789c7be36e398d831278714867223"} Dec 01 08:49:21 crc kubenswrapper[4689]: I1201 08:49:21.421575 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerStarted","Data":"12a1e9d610346d3f959ac48c1009046d7692b928637f4ae4c897c9c5c3dd875a"} Dec 01 08:49:21 crc kubenswrapper[4689]: I1201 08:49:21.421583 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerStarted","Data":"7a6bd22d3e13d14fb2c119d1e2ab5b245db96eba8cfa6e4d2d1c674cb1840255"} Dec 01 08:49:21 crc kubenswrapper[4689]: I1201 08:49:21.421591 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerStarted","Data":"e0f97aa1f17a5df79471edae0881b02d69e6797e3685f2494ea9a7dd9dd161d2"} Dec 01 08:49:24 crc kubenswrapper[4689]: I1201 08:49:24.448126 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerStarted","Data":"614af6fb73b840152dc2d6a0356b55fa1720284ba86d3125da06e088414d8b9e"} Dec 01 08:49:26 crc kubenswrapper[4689]: I1201 08:49:26.476677 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" event={"ID":"bb884aea-14ac-453f-b847-f161c8e48bd3","Type":"ContainerStarted","Data":"2edd8923743a88b3b12b0df04031c59465598ce9c9e13c044408d6b0094ca2c1"} Dec 01 08:49:26 crc kubenswrapper[4689]: I1201 08:49:26.477330 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:26 crc kubenswrapper[4689]: I1201 08:49:26.477356 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:26 crc kubenswrapper[4689]: I1201 08:49:26.512751 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" podStartSLOduration=7.512713444 podStartE2EDuration="7.512713444s" podCreationTimestamp="2025-12-01 08:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:49:26.509506266 +0000 UTC m=+646.581794170" watchObservedRunningTime="2025-12-01 08:49:26.512713444 +0000 UTC m=+646.585001348" Dec 01 08:49:26 crc kubenswrapper[4689]: I1201 08:49:26.517119 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:27 crc kubenswrapper[4689]: I1201 08:49:27.482094 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:27 crc kubenswrapper[4689]: I1201 08:49:27.538245 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:49:31 crc kubenswrapper[4689]: I1201 08:49:31.050389 4689 scope.go:117] "RemoveContainer" containerID="e15e4d4d20bedfa63e8dc39de991d3e641a4c410f89da82a2a3386442c160632" Dec 01 08:49:31 crc kubenswrapper[4689]: E1201 08:49:31.050974 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dl2st_openshift-multus(6bebcb50-c292-4bca-9299-2fdc21439b18)\"" pod="openshift-multus/multus-dl2st" podUID="6bebcb50-c292-4bca-9299-2fdc21439b18" Dec 01 08:49:44 crc kubenswrapper[4689]: I1201 08:49:44.048917 4689 scope.go:117] "RemoveContainer" containerID="e15e4d4d20bedfa63e8dc39de991d3e641a4c410f89da82a2a3386442c160632" Dec 01 08:49:44 crc kubenswrapper[4689]: I1201 08:49:44.604111 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dl2st_6bebcb50-c292-4bca-9299-2fdc21439b18/kube-multus/2.log" Dec 01 08:49:44 crc kubenswrapper[4689]: I1201 08:49:44.606935 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dl2st" event={"ID":"6bebcb50-c292-4bca-9299-2fdc21439b18","Type":"ContainerStarted","Data":"c2463f80dc579425572ccc3538e54137beb57dfe4578985acc477bb727e98203"} Dec 01 08:49:50 crc kubenswrapper[4689]: I1201 08:49:50.051112 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mqp2s" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.074753 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf"] Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.079764 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.082313 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwkvr\" (UniqueName: \"kubernetes.io/projected/4205e462-7e96-4991-8157-5a483dec2452-kube-api-access-wwkvr\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.082439 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.082479 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.083115 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.083217 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf"] Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.183772 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.183922 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.184023 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwkvr\" (UniqueName: \"kubernetes.io/projected/4205e462-7e96-4991-8157-5a483dec2452-kube-api-access-wwkvr\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.184637 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.184781 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.211927 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwkvr\" (UniqueName: \"kubernetes.io/projected/4205e462-7e96-4991-8157-5a483dec2452-kube-api-access-wwkvr\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.414250 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.714695 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf"] Dec 01 08:50:04 crc kubenswrapper[4689]: I1201 08:50:04.747747 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" event={"ID":"4205e462-7e96-4991-8157-5a483dec2452","Type":"ContainerStarted","Data":"8b227bc67dd1193a7866c5ffed3b1bf067c7db16de687024762935834e4d814c"} Dec 01 08:50:05 crc kubenswrapper[4689]: I1201 08:50:05.762651 4689 generic.go:334] "Generic (PLEG): container finished" podID="4205e462-7e96-4991-8157-5a483dec2452" containerID="adda90602b36a9bcc15a6b031a67bf3ea53c37b8ffb482fd67635fd3cadef706" exitCode=0 Dec 01 08:50:05 crc kubenswrapper[4689]: I1201 08:50:05.762747 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" event={"ID":"4205e462-7e96-4991-8157-5a483dec2452","Type":"ContainerDied","Data":"adda90602b36a9bcc15a6b031a67bf3ea53c37b8ffb482fd67635fd3cadef706"} Dec 01 08:50:07 crc kubenswrapper[4689]: I1201 08:50:07.777742 4689 generic.go:334] "Generic (PLEG): container finished" podID="4205e462-7e96-4991-8157-5a483dec2452" containerID="60525bf7963d700e4ffb2bf8fac560164976dc2396dee896761d8e02877ddc83" exitCode=0 Dec 01 08:50:07 crc kubenswrapper[4689]: I1201 08:50:07.777831 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" event={"ID":"4205e462-7e96-4991-8157-5a483dec2452","Type":"ContainerDied","Data":"60525bf7963d700e4ffb2bf8fac560164976dc2396dee896761d8e02877ddc83"} Dec 01 08:50:08 crc kubenswrapper[4689]: I1201 08:50:08.784905 4689 generic.go:334] "Generic (PLEG): container finished" podID="4205e462-7e96-4991-8157-5a483dec2452" containerID="36492dfd595e21f55ca7450cd62dc1a5abc6f433aa69b9eaf098780232ec906b" exitCode=0 Dec 01 08:50:08 crc kubenswrapper[4689]: I1201 
08:50:08.784989 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" event={"ID":"4205e462-7e96-4991-8157-5a483dec2452","Type":"ContainerDied","Data":"36492dfd595e21f55ca7450cd62dc1a5abc6f433aa69b9eaf098780232ec906b"} Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.100679 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.256165 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-bundle\") pod \"4205e462-7e96-4991-8157-5a483dec2452\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.256425 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwkvr\" (UniqueName: \"kubernetes.io/projected/4205e462-7e96-4991-8157-5a483dec2452-kube-api-access-wwkvr\") pod \"4205e462-7e96-4991-8157-5a483dec2452\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.256472 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-util\") pod \"4205e462-7e96-4991-8157-5a483dec2452\" (UID: \"4205e462-7e96-4991-8157-5a483dec2452\") " Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.258585 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-bundle" (OuterVolumeSpecName: "bundle") pod "4205e462-7e96-4991-8157-5a483dec2452" (UID: "4205e462-7e96-4991-8157-5a483dec2452"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.263784 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4205e462-7e96-4991-8157-5a483dec2452-kube-api-access-wwkvr" (OuterVolumeSpecName: "kube-api-access-wwkvr") pod "4205e462-7e96-4991-8157-5a483dec2452" (UID: "4205e462-7e96-4991-8157-5a483dec2452"). InnerVolumeSpecName "kube-api-access-wwkvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.268412 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-util" (OuterVolumeSpecName: "util") pod "4205e462-7e96-4991-8157-5a483dec2452" (UID: "4205e462-7e96-4991-8157-5a483dec2452"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.359249 4689 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-util\") on node \"crc\" DevicePath \"\"" Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.359312 4689 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4205e462-7e96-4991-8157-5a483dec2452-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.359326 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwkvr\" (UniqueName: \"kubernetes.io/projected/4205e462-7e96-4991-8157-5a483dec2452-kube-api-access-wwkvr\") on node \"crc\" DevicePath \"\"" Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.799854 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" event={"ID":"4205e462-7e96-4991-8157-5a483dec2452","Type":"ContainerDied","Data":"8b227bc67dd1193a7866c5ffed3b1bf067c7db16de687024762935834e4d814c"} Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.800414 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf" Dec 01 08:50:10 crc kubenswrapper[4689]: I1201 08:50:10.800423 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b227bc67dd1193a7866c5ffed3b1bf067c7db16de687024762935834e4d814c" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.159326 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6"] Dec 01 08:50:13 crc kubenswrapper[4689]: E1201 08:50:13.161870 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4205e462-7e96-4991-8157-5a483dec2452" containerName="extract" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.161993 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4205e462-7e96-4991-8157-5a483dec2452" containerName="extract" Dec 01 08:50:13 crc kubenswrapper[4689]: E1201 08:50:13.162105 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4205e462-7e96-4991-8157-5a483dec2452" containerName="util" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.162167 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4205e462-7e96-4991-8157-5a483dec2452" containerName="util" Dec 01 08:50:13 crc kubenswrapper[4689]: E1201 08:50:13.162231 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4205e462-7e96-4991-8157-5a483dec2452" containerName="pull" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.162284 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4205e462-7e96-4991-8157-5a483dec2452" containerName="pull" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.162514 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4205e462-7e96-4991-8157-5a483dec2452" containerName="extract" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.163152 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.165354 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-g4g49" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.165621 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.166049 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.177568 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6"] Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.192954 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m4mx\" (UniqueName: \"kubernetes.io/projected/f38467c3-1d62-49ae-97f5-1fa17dbb514e-kube-api-access-4m4mx\") pod \"nmstate-operator-5b5b58f5c8-ldxm6\" (UID: \"f38467c3-1d62-49ae-97f5-1fa17dbb514e\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.293911 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m4mx\" (UniqueName: \"kubernetes.io/projected/f38467c3-1d62-49ae-97f5-1fa17dbb514e-kube-api-access-4m4mx\") pod \"nmstate-operator-5b5b58f5c8-ldxm6\" (UID: \"f38467c3-1d62-49ae-97f5-1fa17dbb514e\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.317525 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m4mx\" (UniqueName: \"kubernetes.io/projected/f38467c3-1d62-49ae-97f5-1fa17dbb514e-kube-api-access-4m4mx\") pod \"nmstate-operator-5b5b58f5c8-ldxm6\" (UID: \"f38467c3-1d62-49ae-97f5-1fa17dbb514e\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.488615 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6" Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.776155 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6"] Dec 01 08:50:13 crc kubenswrapper[4689]: I1201 08:50:13.851604 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6" event={"ID":"f38467c3-1d62-49ae-97f5-1fa17dbb514e","Type":"ContainerStarted","Data":"529ef9a5a17b8b0d1ca335c0d008f281f0781a8003c2086dff8ef6612151c2aa"} Dec 01 08:50:16 crc kubenswrapper[4689]: I1201 08:50:16.867248 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6" event={"ID":"f38467c3-1d62-49ae-97f5-1fa17dbb514e","Type":"ContainerStarted","Data":"7b752ca74c4458b936a160a619b6c37a49ebddaed37db48be01ff15e9258439f"} Dec 01 08:50:16 crc kubenswrapper[4689]: I1201 08:50:16.890680 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ldxm6" podStartSLOduration=1.2729421890000001 podStartE2EDuration="3.890625252s" podCreationTimestamp="2025-12-01 08:50:13 +0000 UTC" firstStartedPulling="2025-12-01 08:50:13.797232835 +0000 UTC m=+693.869520739" lastFinishedPulling="2025-12-01 08:50:16.414915858 +0000 UTC m=+696.487203802" observedRunningTime="2025-12-01 08:50:16.886148018 +0000 UTC m=+696.958435932" watchObservedRunningTime="2025-12-01 08:50:16.890625252 +0000 UTC m=+696.962913156" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.251140 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj"] Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.253696 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.256665 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zl9t\" (UniqueName: \"kubernetes.io/projected/c0686309-db1b-42c8-963c-e66bee2b8bb1-kube-api-access-5zl9t\") pod \"nmstate-metrics-7f946cbc9-nwsvj\" (UID: \"c0686309-db1b-42c8-963c-e66bee2b8bb1\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.258303 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bblqw" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.280219 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp"] Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.280968 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.283706 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.284089 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj"] Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.292251 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mtr66"] Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.293123 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.318048 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp"] Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.358967 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/569aea60-ecf2-4ccb-b516-93098c33139a-ovs-socket\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.359004 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg6bn\" (UniqueName: \"kubernetes.io/projected/4eb87e27-d5ce-4aa6-9808-862d7afb9fd1-kube-api-access-gg6bn\") pod \"nmstate-webhook-5f6d4c5ccb-fbrdp\" (UID: \"4eb87e27-d5ce-4aa6-9808-862d7afb9fd1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.359056 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/569aea60-ecf2-4ccb-b516-93098c33139a-nmstate-lock\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.359074 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/569aea60-ecf2-4ccb-b516-93098c33139a-dbus-socket\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.359098 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kp2\" (UniqueName: \"kubernetes.io/projected/569aea60-ecf2-4ccb-b516-93098c33139a-kube-api-access-q7kp2\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.359120 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zl9t\" (UniqueName: \"kubernetes.io/projected/c0686309-db1b-42c8-963c-e66bee2b8bb1-kube-api-access-5zl9t\") pod \"nmstate-metrics-7f946cbc9-nwsvj\" (UID: \"c0686309-db1b-42c8-963c-e66bee2b8bb1\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.359150 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4eb87e27-d5ce-4aa6-9808-862d7afb9fd1-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-fbrdp\" (UID: \"4eb87e27-d5ce-4aa6-9808-862d7afb9fd1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.388919 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zl9t\" (UniqueName: \"kubernetes.io/projected/c0686309-db1b-42c8-963c-e66bee2b8bb1-kube-api-access-5zl9t\") pod \"nmstate-metrics-7f946cbc9-nwsvj\" (UID: \"c0686309-db1b-42c8-963c-e66bee2b8bb1\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj" Dec 01 08:50:24 crc 
kubenswrapper[4689]: I1201 08:50:24.423956 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h"] Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.424618 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.426758 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.426841 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-knbpd" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.427436 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.459884 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4eb87e27-d5ce-4aa6-9808-862d7afb9fd1-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-fbrdp\" (UID: \"4eb87e27-d5ce-4aa6-9808-862d7afb9fd1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:24 crc kubenswrapper[4689]: E1201 08:50:24.460136 4689 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.460158 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwlpb\" (UniqueName: \"kubernetes.io/projected/888875d4-358f-4232-96f5-7fe326118284-kube-api-access-qwlpb\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:24 crc kubenswrapper[4689]: E1201 08:50:24.460313 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4eb87e27-d5ce-4aa6-9808-862d7afb9fd1-tls-key-pair podName:4eb87e27-d5ce-4aa6-9808-862d7afb9fd1 nodeName:}" failed. No retries permitted until 2025-12-01 08:50:24.960264354 +0000 UTC m=+705.032552328 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4eb87e27-d5ce-4aa6-9808-862d7afb9fd1-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-fbrdp" (UID: "4eb87e27-d5ce-4aa6-9808-862d7afb9fd1") : secret "openshift-nmstate-webhook" not found Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.460524 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/569aea60-ecf2-4ccb-b516-93098c33139a-ovs-socket\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.460647 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg6bn\" (UniqueName: \"kubernetes.io/projected/4eb87e27-d5ce-4aa6-9808-862d7afb9fd1-kube-api-access-gg6bn\") pod \"nmstate-webhook-5f6d4c5ccb-fbrdp\" (UID: \"4eb87e27-d5ce-4aa6-9808-862d7afb9fd1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.460749 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/888875d4-358f-4232-96f5-7fe326118284-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.460866 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/569aea60-ecf2-4ccb-b516-93098c33139a-nmstate-lock\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.460959 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/569aea60-ecf2-4ccb-b516-93098c33139a-dbus-socket\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.461021 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/569aea60-ecf2-4ccb-b516-93098c33139a-nmstate-lock\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.461151 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7kp2\" (UniqueName: \"kubernetes.io/projected/569aea60-ecf2-4ccb-b516-93098c33139a-kube-api-access-q7kp2\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.461258 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/888875d4-358f-4232-96f5-7fe326118284-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.460667 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/569aea60-ecf2-4ccb-b516-93098c33139a-ovs-socket\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.461400 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/569aea60-ecf2-4ccb-b516-93098c33139a-dbus-socket\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.474425 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h"] Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.483760 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7kp2\" (UniqueName: \"kubernetes.io/projected/569aea60-ecf2-4ccb-b516-93098c33139a-kube-api-access-q7kp2\") pod \"nmstate-handler-mtr66\" (UID: \"569aea60-ecf2-4ccb-b516-93098c33139a\") " pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.484018 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg6bn\" (UniqueName: \"kubernetes.io/projected/4eb87e27-d5ce-4aa6-9808-862d7afb9fd1-kube-api-access-gg6bn\") pod \"nmstate-webhook-5f6d4c5ccb-fbrdp\" (UID: \"4eb87e27-d5ce-4aa6-9808-862d7afb9fd1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.562556 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/888875d4-358f-4232-96f5-7fe326118284-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.562618 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/888875d4-358f-4232-96f5-7fe326118284-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.562710 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwlpb\" (UniqueName: \"kubernetes.io/projected/888875d4-358f-4232-96f5-7fe326118284-kube-api-access-qwlpb\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:24 crc kubenswrapper[4689]: E1201 08:50:24.563233 4689 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 01 08:50:24 crc kubenswrapper[4689]: E1201 08:50:24.563335 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/888875d4-358f-4232-96f5-7fe326118284-plugin-serving-cert podName:888875d4-358f-4232-96f5-7fe326118284 nodeName:}" failed. No retries permitted until 2025-12-01 08:50:25.063314666 +0000 UTC m=+705.135602570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/888875d4-358f-4232-96f5-7fe326118284-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-rls4h" (UID: "888875d4-358f-4232-96f5-7fe326118284") : secret "plugin-serving-cert" not found Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.564093 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/888875d4-358f-4232-96f5-7fe326118284-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.571627 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.598378 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwlpb\" (UniqueName: \"kubernetes.io/projected/888875d4-358f-4232-96f5-7fe326118284-kube-api-access-qwlpb\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.607960 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.658618 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-866776c457-g542r"] Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.659315 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.663730 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qx47\" (UniqueName: \"kubernetes.io/projected/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-kube-api-access-5qx47\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.663932 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-console-oauth-config\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.663997 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-service-ca\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.664068 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-console-config\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.664095 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-oauth-serving-cert\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.664175 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-console-serving-cert\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.664230 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-trusted-ca-bundle\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.676050 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-866776c457-g542r"] Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.764992 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qx47\" (UniqueName: \"kubernetes.io/projected/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-kube-api-access-5qx47\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 
crc kubenswrapper[4689]: I1201 08:50:24.765046 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-console-oauth-config\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.765064 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-service-ca\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.765088 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-console-config\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.765105 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-oauth-serving-cert\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.765127 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-console-serving-cert\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.765150 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-trusted-ca-bundle\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.767120 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-trusted-ca-bundle\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.768845 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-console-config\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.769441 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-service-ca\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.770120 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-oauth-serving-cert\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.773834 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-console-oauth-config\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.774233 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-console-serving-cert\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.793529 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qx47\" (UniqueName: \"kubernetes.io/projected/c24dc181-1b13-4a51-a87c-16a0b8d1d11d-kube-api-access-5qx47\") pod \"console-866776c457-g542r\" (UID: \"c24dc181-1b13-4a51-a87c-16a0b8d1d11d\") " pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.844174 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj"] Dec 01 08:50:24 crc kubenswrapper[4689]: W1201 08:50:24.849747 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0686309_db1b_42c8_963c_e66bee2b8bb1.slice/crio-b24559cb7471c6677e70609291449976fde3afa38c565e0e6cc6f84407f461a3 WatchSource:0}: Error finding container b24559cb7471c6677e70609291449976fde3afa38c565e0e6cc6f84407f461a3: Status 404 returned error can't find the container with id b24559cb7471c6677e70609291449976fde3afa38c565e0e6cc6f84407f461a3 Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.919332 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj" event={"ID":"c0686309-db1b-42c8-963c-e66bee2b8bb1","Type":"ContainerStarted","Data":"b24559cb7471c6677e70609291449976fde3afa38c565e0e6cc6f84407f461a3"} Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.920468 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mtr66" event={"ID":"569aea60-ecf2-4ccb-b516-93098c33139a","Type":"ContainerStarted","Data":"9564e3c401ba9da840cf94a38cfc1f783b50efa36d874125ebaf332c98ab5d14"} Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.970342 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4eb87e27-d5ce-4aa6-9808-862d7afb9fd1-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-fbrdp\" (UID: \"4eb87e27-d5ce-4aa6-9808-862d7afb9fd1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.974617 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4eb87e27-d5ce-4aa6-9808-862d7afb9fd1-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-fbrdp\" (UID: 
\"4eb87e27-d5ce-4aa6-9808-862d7afb9fd1\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:24 crc kubenswrapper[4689]: I1201 08:50:24.980405 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.072072 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/888875d4-358f-4232-96f5-7fe326118284-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.076332 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/888875d4-358f-4232-96f5-7fe326118284-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-rls4h\" (UID: \"888875d4-358f-4232-96f5-7fe326118284\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.197284 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.248795 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-866776c457-g542r"] Dec 01 08:50:25 crc kubenswrapper[4689]: W1201 08:50:25.255723 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc24dc181_1b13_4a51_a87c_16a0b8d1d11d.slice/crio-ccf8c1751da4d7f7a6f441c5d23cb37e12d5044537a5615ac3c48ff221653d69 WatchSource:0}: Error finding container ccf8c1751da4d7f7a6f441c5d23cb37e12d5044537a5615ac3c48ff221653d69: Status 404 returned error can't find the container with id ccf8c1751da4d7f7a6f441c5d23cb37e12d5044537a5615ac3c48ff221653d69 Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.338828 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.470454 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp"] Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.783131 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h"] Dec 01 08:50:25 crc kubenswrapper[4689]: W1201 08:50:25.792587 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod888875d4_358f_4232_96f5_7fe326118284.slice/crio-dc1c38e9072ab5cf14cb4d51569b5eef38f29600a94006d707351fd4f13cb525 WatchSource:0}: Error finding container dc1c38e9072ab5cf14cb4d51569b5eef38f29600a94006d707351fd4f13cb525: Status 404 returned error can't find the container with id dc1c38e9072ab5cf14cb4d51569b5eef38f29600a94006d707351fd4f13cb525 Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.931930 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" event={"ID":"4eb87e27-d5ce-4aa6-9808-862d7afb9fd1","Type":"ContainerStarted","Data":"feaed80057596d81fa507c4f39c8aed2b92d5b4f0fceeea22b3fff5e9148094d"} Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.936791 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-866776c457-g542r" event={"ID":"c24dc181-1b13-4a51-a87c-16a0b8d1d11d","Type":"ContainerStarted","Data":"2abba3240e9e4eeaf650b3140491a451e1d082916f928a2a272d231dd36fc3f9"} Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.936841 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-866776c457-g542r" event={"ID":"c24dc181-1b13-4a51-a87c-16a0b8d1d11d","Type":"ContainerStarted","Data":"ccf8c1751da4d7f7a6f441c5d23cb37e12d5044537a5615ac3c48ff221653d69"} Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.939409 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" event={"ID":"888875d4-358f-4232-96f5-7fe326118284","Type":"ContainerStarted","Data":"dc1c38e9072ab5cf14cb4d51569b5eef38f29600a94006d707351fd4f13cb525"} Dec 01 08:50:25 crc kubenswrapper[4689]: I1201 08:50:25.959305 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-866776c457-g542r" podStartSLOduration=1.95926159 podStartE2EDuration="1.95926159s" podCreationTimestamp="2025-12-01 08:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:50:25.953484962 +0000 UTC m=+706.025772876" watchObservedRunningTime="2025-12-01 08:50:25.95926159 +0000 UTC m=+706.031549494" Dec 01 08:50:27 crc kubenswrapper[4689]: I1201 08:50:27.951835 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mtr66" event={"ID":"569aea60-ecf2-4ccb-b516-93098c33139a","Type":"ContainerStarted","Data":"3149fcf5c829d65fed085494db5979b0a8fe1d7453abc211748cfe3ec60dbae4"} Dec 01 08:50:27 crc kubenswrapper[4689]: I1201 08:50:27.952145 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:27 crc kubenswrapper[4689]: I1201 08:50:27.953343 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj" 
event={"ID":"c0686309-db1b-42c8-963c-e66bee2b8bb1","Type":"ContainerStarted","Data":"89c09f923ccea2ab68a94d4664b6750be4040a102ca4746229a541774015d9c5"} Dec 01 08:50:27 crc kubenswrapper[4689]: I1201 08:50:27.955488 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" event={"ID":"4eb87e27-d5ce-4aa6-9808-862d7afb9fd1","Type":"ContainerStarted","Data":"a43485be800fcb08cc50a5e5edad8122db4ddfdc8ccd3972f1effbe10b1c600a"} Dec 01 08:50:27 crc kubenswrapper[4689]: I1201 08:50:27.956527 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:50:27 crc kubenswrapper[4689]: I1201 08:50:27.975750 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mtr66" podStartSLOduration=1.348075634 podStartE2EDuration="3.975726239s" podCreationTimestamp="2025-12-01 08:50:24 +0000 UTC" firstStartedPulling="2025-12-01 08:50:24.672457535 +0000 UTC m=+704.744745439" lastFinishedPulling="2025-12-01 08:50:27.30010814 +0000 UTC m=+707.372396044" observedRunningTime="2025-12-01 08:50:27.974820153 +0000 UTC m=+708.047108057" watchObservedRunningTime="2025-12-01 08:50:27.975726239 +0000 UTC m=+708.048014143" Dec 01 08:50:27 crc kubenswrapper[4689]: I1201 08:50:27.989674 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" podStartSLOduration=2.177135348 podStartE2EDuration="3.98960857s" podCreationTimestamp="2025-12-01 08:50:24 +0000 UTC" firstStartedPulling="2025-12-01 08:50:25.481050768 +0000 UTC m=+705.553338682" lastFinishedPulling="2025-12-01 08:50:27.293524 +0000 UTC m=+707.365811904" observedRunningTime="2025-12-01 08:50:27.986947947 +0000 UTC m=+708.059235851" watchObservedRunningTime="2025-12-01 08:50:27.98960857 +0000 UTC m=+708.061896474" Dec 01 08:50:28 crc kubenswrapper[4689]: I1201 08:50:28.962256 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" event={"ID":"888875d4-358f-4232-96f5-7fe326118284","Type":"ContainerStarted","Data":"8d5256114f899f4e937f107ebe4380d663ec2c5ca00e61f25acb4fc0ca00588e"} Dec 01 08:50:28 crc kubenswrapper[4689]: I1201 08:50:28.980100 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-rls4h" podStartSLOduration=2.333216709 podStartE2EDuration="4.980070001s" podCreationTimestamp="2025-12-01 08:50:24 +0000 UTC" firstStartedPulling="2025-12-01 08:50:25.796100717 +0000 UTC m=+705.868388661" lastFinishedPulling="2025-12-01 08:50:28.442954059 +0000 UTC m=+708.515241953" observedRunningTime="2025-12-01 08:50:28.975306019 +0000 UTC m=+709.047593933" watchObservedRunningTime="2025-12-01 08:50:28.980070001 +0000 UTC m=+709.052357905" Dec 01 08:50:31 crc kubenswrapper[4689]: I1201 08:50:31.984107 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj" event={"ID":"c0686309-db1b-42c8-963c-e66bee2b8bb1","Type":"ContainerStarted","Data":"cee7467ca05b73f5b020d9c9d38b0588e30043af4a348e9620883812f16e9848"} Dec 01 08:50:32 crc kubenswrapper[4689]: I1201 08:50:32.020718 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-nwsvj" podStartSLOduration=1.664855959 podStartE2EDuration="8.020682955s" podCreationTimestamp="2025-12-01 08:50:24 +0000 UTC" 
firstStartedPulling="2025-12-01 08:50:24.851786844 +0000 UTC m=+704.924074748" lastFinishedPulling="2025-12-01 08:50:31.20761384 +0000 UTC m=+711.279901744" observedRunningTime="2025-12-01 08:50:32.008882571 +0000 UTC m=+712.081170565" watchObservedRunningTime="2025-12-01 08:50:32.020682955 +0000 UTC m=+712.092970889" Dec 01 08:50:34 crc kubenswrapper[4689]: I1201 08:50:34.642482 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mtr66" Dec 01 08:50:34 crc kubenswrapper[4689]: I1201 08:50:34.981177 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:34 crc kubenswrapper[4689]: I1201 08:50:34.981260 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:34 crc kubenswrapper[4689]: I1201 08:50:34.991485 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:35 crc kubenswrapper[4689]: I1201 08:50:35.028977 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-866776c457-g542r" Dec 01 08:50:35 crc kubenswrapper[4689]: I1201 08:50:35.108633 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j5r2f"] Dec 01 08:50:39 crc kubenswrapper[4689]: I1201 08:50:39.147415 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:50:39 crc kubenswrapper[4689]: I1201 08:50:39.147524 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:50:45 crc kubenswrapper[4689]: I1201 08:50:45.205271 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.156812 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-j5r2f" podUID="710ccb76-093a-484d-a784-737ae81e7c21" containerName="console" containerID="cri-o://3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556" gracePeriod=15 Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.505729 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j5r2f_710ccb76-093a-484d-a784-737ae81e7c21/console/0.log" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.506126 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.624559 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5qsf\" (UniqueName: \"kubernetes.io/projected/710ccb76-093a-484d-a784-737ae81e7c21-kube-api-access-l5qsf\") pod \"710ccb76-093a-484d-a784-737ae81e7c21\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.624604 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-serving-cert\") pod \"710ccb76-093a-484d-a784-737ae81e7c21\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.624634 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-console-config\") pod \"710ccb76-093a-484d-a784-737ae81e7c21\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.624682 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-oauth-serving-cert\") pod \"710ccb76-093a-484d-a784-737ae81e7c21\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.624697 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-trusted-ca-bundle\") pod \"710ccb76-093a-484d-a784-737ae81e7c21\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.624733 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-service-ca\") pod \"710ccb76-093a-484d-a784-737ae81e7c21\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.624752 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-oauth-config\") pod \"710ccb76-093a-484d-a784-737ae81e7c21\" (UID: \"710ccb76-093a-484d-a784-737ae81e7c21\") " Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.626051 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-service-ca" (OuterVolumeSpecName: "service-ca") pod "710ccb76-093a-484d-a784-737ae81e7c21" (UID: "710ccb76-093a-484d-a784-737ae81e7c21"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.626180 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "710ccb76-093a-484d-a784-737ae81e7c21" (UID: "710ccb76-093a-484d-a784-737ae81e7c21"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.626486 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "710ccb76-093a-484d-a784-737ae81e7c21" (UID: "710ccb76-093a-484d-a784-737ae81e7c21"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.626951 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-console-config" (OuterVolumeSpecName: "console-config") pod "710ccb76-093a-484d-a784-737ae81e7c21" (UID: "710ccb76-093a-484d-a784-737ae81e7c21"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.634450 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/710ccb76-093a-484d-a784-737ae81e7c21-kube-api-access-l5qsf" (OuterVolumeSpecName: "kube-api-access-l5qsf") pod "710ccb76-093a-484d-a784-737ae81e7c21" (UID: "710ccb76-093a-484d-a784-737ae81e7c21"). InnerVolumeSpecName "kube-api-access-l5qsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.634871 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "710ccb76-093a-484d-a784-737ae81e7c21" (UID: "710ccb76-093a-484d-a784-737ae81e7c21"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.643058 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "710ccb76-093a-484d-a784-737ae81e7c21" (UID: "710ccb76-093a-484d-a784-737ae81e7c21"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.725489 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5qsf\" (UniqueName: \"kubernetes.io/projected/710ccb76-093a-484d-a784-737ae81e7c21-kube-api-access-l5qsf\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.725538 4689 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.725550 4689 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-console-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.725561 4689 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.725572 4689 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.725582 4689 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/710ccb76-093a-484d-a784-737ae81e7c21-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.725592 4689 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/710ccb76-093a-484d-a784-737ae81e7c21-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.823737 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4"] Dec 01 08:51:00 crc kubenswrapper[4689]: E1201 08:51:00.824289 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="710ccb76-093a-484d-a784-737ae81e7c21" containerName="console" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.824327 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="710ccb76-093a-484d-a784-737ae81e7c21" containerName="console" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.824559 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="710ccb76-093a-484d-a784-737ae81e7c21" containerName="console" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.826853 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.829743 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.838298 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4"] Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.928201 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28jmk\" (UniqueName: \"kubernetes.io/projected/76b524e4-b427-4426-86ee-aa0b67f86533-kube-api-access-28jmk\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.928327 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:00 crc kubenswrapper[4689]: I1201 08:51:00.928506 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.029941 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.030079 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28jmk\" (UniqueName: \"kubernetes.io/projected/76b524e4-b427-4426-86ee-aa0b67f86533-kube-api-access-28jmk\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.030129 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.030693 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.030877 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.051555 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28jmk\" (UniqueName: \"kubernetes.io/projected/76b524e4-b427-4426-86ee-aa0b67f86533-kube-api-access-28jmk\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.143278 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.208218 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j5r2f_710ccb76-093a-484d-a784-737ae81e7c21/console/0.log" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.210693 4689 generic.go:334] "Generic (PLEG): container finished" podID="710ccb76-093a-484d-a784-737ae81e7c21" containerID="3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556" exitCode=2 Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.210831 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j5r2f" event={"ID":"710ccb76-093a-484d-a784-737ae81e7c21","Type":"ContainerDied","Data":"3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556"} Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.210885 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j5r2f" event={"ID":"710ccb76-093a-484d-a784-737ae81e7c21","Type":"ContainerDied","Data":"5671ca8829ff117d44511aa04de10cd2bf9fe76b37a026879c8d47ca73a4e996"} Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.211024 4689 scope.go:117] "RemoveContainer" containerID="3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.211792 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-j5r2f" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.232495 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j5r2f"] Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.236268 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-j5r2f"] Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.248685 4689 scope.go:117] "RemoveContainer" containerID="3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556" Dec 01 08:51:01 crc kubenswrapper[4689]: E1201 08:51:01.249514 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556\": container with ID starting with 3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556 not found: ID does not exist" containerID="3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.249869 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556"} err="failed to get container status \"3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556\": rpc error: code = NotFound desc = could not find container \"3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556\": container with ID starting with 3c18e4e46777bf2673a83d5ea4786a14057f72bd0db2cdad749ab894fa7b0556 not found: ID does not exist" Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.413259 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4"] Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.493543 4689 patch_prober.go:28] interesting pod/console-f9d7485db-j5r2f container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 08:51:01 crc kubenswrapper[4689]: I1201 08:51:01.493644 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-j5r2f" podUID="710ccb76-093a-484d-a784-737ae81e7c21" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 08:51:02 crc kubenswrapper[4689]: I1201 08:51:02.221351 4689 generic.go:334] "Generic (PLEG): container finished" podID="76b524e4-b427-4426-86ee-aa0b67f86533" containerID="eca510f04a62dcc44ce72edea26c5fa514c87be2ad59ad0612333a16958bede9" exitCode=0 Dec 01 08:51:02 crc kubenswrapper[4689]: I1201 08:51:02.221438 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" event={"ID":"76b524e4-b427-4426-86ee-aa0b67f86533","Type":"ContainerDied","Data":"eca510f04a62dcc44ce72edea26c5fa514c87be2ad59ad0612333a16958bede9"} Dec 01 08:51:02 crc kubenswrapper[4689]: I1201 08:51:02.221863 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" 
event={"ID":"76b524e4-b427-4426-86ee-aa0b67f86533","Type":"ContainerStarted","Data":"f80fd8ee30916367cc159028718e652caec586120f5be4c4db3ecd7a78ec026e"} Dec 01 08:51:03 crc kubenswrapper[4689]: I1201 08:51:03.057684 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="710ccb76-093a-484d-a784-737ae81e7c21" path="/var/lib/kubelet/pods/710ccb76-093a-484d-a784-737ae81e7c21/volumes" Dec 01 08:51:04 crc kubenswrapper[4689]: I1201 08:51:04.238697 4689 generic.go:334] "Generic (PLEG): container finished" podID="76b524e4-b427-4426-86ee-aa0b67f86533" containerID="c69d64e6284df355775454d00b446872a0489d97156771bd3884bcd9f45fe283" exitCode=0 Dec 01 08:51:04 crc kubenswrapper[4689]: I1201 08:51:04.238851 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" event={"ID":"76b524e4-b427-4426-86ee-aa0b67f86533","Type":"ContainerDied","Data":"c69d64e6284df355775454d00b446872a0489d97156771bd3884bcd9f45fe283"} Dec 01 08:51:05 crc kubenswrapper[4689]: I1201 08:51:05.251908 4689 generic.go:334] "Generic (PLEG): container finished" podID="76b524e4-b427-4426-86ee-aa0b67f86533" containerID="7279a38f9a1ea35c3324164896019a1439eff06e5328564a98a65a1d92d63da4" exitCode=0 Dec 01 08:51:05 crc kubenswrapper[4689]: I1201 08:51:05.251961 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" event={"ID":"76b524e4-b427-4426-86ee-aa0b67f86533","Type":"ContainerDied","Data":"7279a38f9a1ea35c3324164896019a1439eff06e5328564a98a65a1d92d63da4"} Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.558242 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.721001 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28jmk\" (UniqueName: \"kubernetes.io/projected/76b524e4-b427-4426-86ee-aa0b67f86533-kube-api-access-28jmk\") pod \"76b524e4-b427-4426-86ee-aa0b67f86533\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.721095 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-bundle\") pod \"76b524e4-b427-4426-86ee-aa0b67f86533\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.721139 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-util\") pod \"76b524e4-b427-4426-86ee-aa0b67f86533\" (UID: \"76b524e4-b427-4426-86ee-aa0b67f86533\") " Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.722733 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-bundle" (OuterVolumeSpecName: "bundle") pod "76b524e4-b427-4426-86ee-aa0b67f86533" (UID: "76b524e4-b427-4426-86ee-aa0b67f86533"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.726894 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b524e4-b427-4426-86ee-aa0b67f86533-kube-api-access-28jmk" (OuterVolumeSpecName: "kube-api-access-28jmk") pod "76b524e4-b427-4426-86ee-aa0b67f86533" (UID: "76b524e4-b427-4426-86ee-aa0b67f86533"). InnerVolumeSpecName "kube-api-access-28jmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.738029 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-util" (OuterVolumeSpecName: "util") pod "76b524e4-b427-4426-86ee-aa0b67f86533" (UID: "76b524e4-b427-4426-86ee-aa0b67f86533"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.822729 4689 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.822762 4689 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76b524e4-b427-4426-86ee-aa0b67f86533-util\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:06 crc kubenswrapper[4689]: I1201 08:51:06.822771 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28jmk\" (UniqueName: \"kubernetes.io/projected/76b524e4-b427-4426-86ee-aa0b67f86533-kube-api-access-28jmk\") on node \"crc\" DevicePath \"\"" Dec 01 08:51:07 crc kubenswrapper[4689]: I1201 08:51:07.265537 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" Dec 01 08:51:07 crc kubenswrapper[4689]: I1201 08:51:07.265482 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4" event={"ID":"76b524e4-b427-4426-86ee-aa0b67f86533","Type":"ContainerDied","Data":"f80fd8ee30916367cc159028718e652caec586120f5be4c4db3ecd7a78ec026e"} Dec 01 08:51:07 crc kubenswrapper[4689]: I1201 08:51:07.265679 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f80fd8ee30916367cc159028718e652caec586120f5be4c4db3ecd7a78ec026e" Dec 01 08:51:09 crc kubenswrapper[4689]: I1201 08:51:09.147494 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:51:09 crc kubenswrapper[4689]: I1201 08:51:09.148449 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.970330 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl"] Dec 01 08:51:15 crc kubenswrapper[4689]: E1201 08:51:15.971276 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b524e4-b427-4426-86ee-aa0b67f86533" containerName="extract" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.971305 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b524e4-b427-4426-86ee-aa0b67f86533" containerName="extract" Dec 01 08:51:15 crc kubenswrapper[4689]: E1201 08:51:15.971342 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b524e4-b427-4426-86ee-aa0b67f86533" containerName="util" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.971350 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b524e4-b427-4426-86ee-aa0b67f86533" containerName="util" Dec 01 08:51:15 crc kubenswrapper[4689]: E1201 08:51:15.971376 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b524e4-b427-4426-86ee-aa0b67f86533" containerName="pull" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.971383 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b524e4-b427-4426-86ee-aa0b67f86533" containerName="pull" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.971525 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b524e4-b427-4426-86ee-aa0b67f86533" containerName="extract" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.972247 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.975397 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.975909 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.977173 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.978636 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-cffsk" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.978763 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 01 08:51:15 crc kubenswrapper[4689]: I1201 08:51:15.997251 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl"] Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.145606 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d09395b-ad54-4b96-af05-ea6ce866de71-webhook-cert\") pod \"metallb-operator-controller-manager-6599c4498-sh7sl\" (UID: \"7d09395b-ad54-4b96-af05-ea6ce866de71\") " pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.145681 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7d09395b-ad54-4b96-af05-ea6ce866de71-apiservice-cert\") pod \"metallb-operator-controller-manager-6599c4498-sh7sl\" (UID: \"7d09395b-ad54-4b96-af05-ea6ce866de71\") " pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.146089 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lx44\" (UniqueName: \"kubernetes.io/projected/7d09395b-ad54-4b96-af05-ea6ce866de71-kube-api-access-9lx44\") pod \"metallb-operator-controller-manager-6599c4498-sh7sl\" (UID: \"7d09395b-ad54-4b96-af05-ea6ce866de71\") " pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.247529 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d09395b-ad54-4b96-af05-ea6ce866de71-webhook-cert\") pod \"metallb-operator-controller-manager-6599c4498-sh7sl\" (UID: \"7d09395b-ad54-4b96-af05-ea6ce866de71\") " pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.247597 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7d09395b-ad54-4b96-af05-ea6ce866de71-apiservice-cert\") pod \"metallb-operator-controller-manager-6599c4498-sh7sl\" (UID: \"7d09395b-ad54-4b96-af05-ea6ce866de71\") " pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.247672 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9lx44\" (UniqueName: \"kubernetes.io/projected/7d09395b-ad54-4b96-af05-ea6ce866de71-kube-api-access-9lx44\") pod \"metallb-operator-controller-manager-6599c4498-sh7sl\" (UID: \"7d09395b-ad54-4b96-af05-ea6ce866de71\") " pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.256993 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d09395b-ad54-4b96-af05-ea6ce866de71-webhook-cert\") pod \"metallb-operator-controller-manager-6599c4498-sh7sl\" (UID: \"7d09395b-ad54-4b96-af05-ea6ce866de71\") " pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.256994 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7d09395b-ad54-4b96-af05-ea6ce866de71-apiservice-cert\") pod \"metallb-operator-controller-manager-6599c4498-sh7sl\" (UID: \"7d09395b-ad54-4b96-af05-ea6ce866de71\") " pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.280758 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lx44\" (UniqueName: \"kubernetes.io/projected/7d09395b-ad54-4b96-af05-ea6ce866de71-kube-api-access-9lx44\") pod \"metallb-operator-controller-manager-6599c4498-sh7sl\" (UID: \"7d09395b-ad54-4b96-af05-ea6ce866de71\") " pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.289050 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.376328 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf"] Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.377652 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.379193 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.379948 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.380154 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-njstl" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.424327 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf"] Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.459917 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p252\" (UniqueName: \"kubernetes.io/projected/1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1-kube-api-access-5p252\") pod \"metallb-operator-webhook-server-fd7fdd679-r8jpf\" (UID: \"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1\") " pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.460033 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1-apiservice-cert\") pod \"metallb-operator-webhook-server-fd7fdd679-r8jpf\" (UID: \"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1\") " pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.460058 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1-webhook-cert\") pod \"metallb-operator-webhook-server-fd7fdd679-r8jpf\" (UID: \"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1\") " pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.562003 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1-webhook-cert\") pod \"metallb-operator-webhook-server-fd7fdd679-r8jpf\" (UID: \"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1\") " pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.562151 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p252\" (UniqueName: \"kubernetes.io/projected/1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1-kube-api-access-5p252\") pod \"metallb-operator-webhook-server-fd7fdd679-r8jpf\" (UID: \"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1\") " pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.562207 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1-apiservice-cert\") pod \"metallb-operator-webhook-server-fd7fdd679-r8jpf\" (UID: \"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1\") " pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.571219 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1-apiservice-cert\") pod \"metallb-operator-webhook-server-fd7fdd679-r8jpf\" (UID: \"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1\") " pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.585639 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1-webhook-cert\") pod \"metallb-operator-webhook-server-fd7fdd679-r8jpf\" (UID: \"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1\") " pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.591495 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p252\" (UniqueName: \"kubernetes.io/projected/1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1-kube-api-access-5p252\") pod \"metallb-operator-webhook-server-fd7fdd679-r8jpf\" (UID: \"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1\") " pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.694951 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.698179 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl"] Dec 01 08:51:16 crc kubenswrapper[4689]: W1201 08:51:16.713078 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d09395b_ad54_4b96_af05_ea6ce866de71.slice/crio-60b58b05fb43facdaab6c3d9429ab863a0811fb9157af8aa76ebdd80a62594d6 WatchSource:0}: Error finding container 60b58b05fb43facdaab6c3d9429ab863a0811fb9157af8aa76ebdd80a62594d6: Status 404 returned error can't find the container with id 60b58b05fb43facdaab6c3d9429ab863a0811fb9157af8aa76ebdd80a62594d6 Dec 01 08:51:16 crc kubenswrapper[4689]: I1201 08:51:16.985970 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf"] Dec 01 08:51:16 crc kubenswrapper[4689]: W1201 08:51:16.994621 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f4ef99a_e0b0_42c7_8599_284fdd6c5ae1.slice/crio-847e8e7ca406e5767fbe3d7cbe12ce48b0dd052e5d880545702ec1771e8a4500 WatchSource:0}: Error finding container 847e8e7ca406e5767fbe3d7cbe12ce48b0dd052e5d880545702ec1771e8a4500: Status 404 returned error can't find the container with id 847e8e7ca406e5767fbe3d7cbe12ce48b0dd052e5d880545702ec1771e8a4500 Dec 01 08:51:17 crc kubenswrapper[4689]: I1201 08:51:17.331832 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" event={"ID":"7d09395b-ad54-4b96-af05-ea6ce866de71","Type":"ContainerStarted","Data":"60b58b05fb43facdaab6c3d9429ab863a0811fb9157af8aa76ebdd80a62594d6"} Dec 01 08:51:17 crc kubenswrapper[4689]: I1201 08:51:17.333521 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" 
event={"ID":"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1","Type":"ContainerStarted","Data":"847e8e7ca406e5767fbe3d7cbe12ce48b0dd052e5d880545702ec1771e8a4500"} Dec 01 08:51:23 crc kubenswrapper[4689]: I1201 08:51:23.371875 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" event={"ID":"7d09395b-ad54-4b96-af05-ea6ce866de71","Type":"ContainerStarted","Data":"ce9e6f72ccd8bad046549034209643bd06bc3ee86c38add8f2155910e6b38163"} Dec 01 08:51:23 crc kubenswrapper[4689]: I1201 08:51:23.372661 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:23 crc kubenswrapper[4689]: I1201 08:51:23.373529 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" event={"ID":"1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1","Type":"ContainerStarted","Data":"2412062987c73359f5d244f9152423176f4aed195dc5d1bfadb6e30490397678"} Dec 01 08:51:23 crc kubenswrapper[4689]: I1201 08:51:23.373666 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:23 crc kubenswrapper[4689]: I1201 08:51:23.404654 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" podStartSLOduration=2.349623842 podStartE2EDuration="8.404614851s" podCreationTimestamp="2025-12-01 08:51:15 +0000 UTC" firstStartedPulling="2025-12-01 08:51:16.71716148 +0000 UTC m=+756.789449384" lastFinishedPulling="2025-12-01 08:51:22.772152479 +0000 UTC m=+762.844440393" observedRunningTime="2025-12-01 08:51:23.398828272 +0000 UTC m=+763.471116176" watchObservedRunningTime="2025-12-01 08:51:23.404614851 +0000 UTC m=+763.476902755" Dec 01 08:51:23 crc kubenswrapper[4689]: I1201 08:51:23.428002 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" podStartSLOduration=1.638491316 podStartE2EDuration="7.427972513s" podCreationTimestamp="2025-12-01 08:51:16 +0000 UTC" firstStartedPulling="2025-12-01 08:51:16.997579402 +0000 UTC m=+757.069867306" lastFinishedPulling="2025-12-01 08:51:22.787060579 +0000 UTC m=+762.859348503" observedRunningTime="2025-12-01 08:51:23.42347736 +0000 UTC m=+763.495765274" watchObservedRunningTime="2025-12-01 08:51:23.427972513 +0000 UTC m=+763.500260417" Dec 01 08:51:32 crc kubenswrapper[4689]: I1201 08:51:32.722151 4689 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 08:51:36 crc kubenswrapper[4689]: I1201 08:51:36.742482 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-fd7fdd679-r8jpf" Dec 01 08:51:39 crc kubenswrapper[4689]: I1201 08:51:39.147308 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:51:39 crc kubenswrapper[4689]: I1201 08:51:39.147438 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:51:39 crc kubenswrapper[4689]: I1201 08:51:39.147488 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:51:39 crc kubenswrapper[4689]: I1201 08:51:39.148193 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74b1ead9c91ab196fa5f6493d6eb41ab2d35580a1ad359148d766458297d4a15"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 08:51:39 crc kubenswrapper[4689]: I1201 08:51:39.148301 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://74b1ead9c91ab196fa5f6493d6eb41ab2d35580a1ad359148d766458297d4a15" gracePeriod=600 Dec 01 08:51:39 crc kubenswrapper[4689]: I1201 08:51:39.466569 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="74b1ead9c91ab196fa5f6493d6eb41ab2d35580a1ad359148d766458297d4a15" exitCode=0 Dec 01 08:51:39 crc kubenswrapper[4689]: I1201 08:51:39.466779 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"74b1ead9c91ab196fa5f6493d6eb41ab2d35580a1ad359148d766458297d4a15"} Dec 01 08:51:39 crc kubenswrapper[4689]: I1201 08:51:39.467029 4689 scope.go:117] "RemoveContainer" containerID="77ce6d5e8c89d838f6758e3e368fce5280554d8513e298f9f66f88dccdb20c3d" Dec 01 08:51:40 crc kubenswrapper[4689]: I1201 08:51:40.474176 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"dc69fc59569a57f3230435206eb87de05390f897bd389b5558c6be2f4c0990e0"} Dec 01 08:51:56 crc kubenswrapper[4689]: I1201 08:51:56.292882 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.168648 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc"] Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.169409 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.172472 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5j4hf"] Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.173419 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.179995 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.183861 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fb22n" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.189537 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3b8a95d-6924-4416-a625-995ed59e230d-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-nmlzc\" (UID: \"b3b8a95d-6924-4416-a625-995ed59e230d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.189892 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rlw\" (UniqueName: \"kubernetes.io/projected/b3b8a95d-6924-4416-a625-995ed59e230d-kube-api-access-v8rlw\") pod \"frr-k8s-webhook-server-7fcb986d4-nmlzc\" (UID: \"b3b8a95d-6924-4416-a625-995ed59e230d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.191793 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.191834 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.208990 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc"] Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.291748 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-reloader\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.292042 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rlw\" (UniqueName: \"kubernetes.io/projected/b3b8a95d-6924-4416-a625-995ed59e230d-kube-api-access-v8rlw\") pod \"frr-k8s-webhook-server-7fcb986d4-nmlzc\" (UID: \"b3b8a95d-6924-4416-a625-995ed59e230d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.292174 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8bt7\" (UniqueName: \"kubernetes.io/projected/67f63643-d748-4058-b24c-66ce8a8c3234-kube-api-access-z8bt7\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.292265 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-frr-sockets\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.292378 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3b8a95d-6924-4416-a625-995ed59e230d-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-nmlzc\" (UID: \"b3b8a95d-6924-4416-a625-995ed59e230d\") " 
pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.292458 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67f63643-d748-4058-b24c-66ce8a8c3234-metrics-certs\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.292556 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-frr-conf\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.292654 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/67f63643-d748-4058-b24c-66ce8a8c3234-frr-startup\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: E1201 08:51:57.292690 4689 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 01 08:51:57 crc kubenswrapper[4689]: E1201 08:51:57.292976 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b8a95d-6924-4416-a625-995ed59e230d-cert podName:b3b8a95d-6924-4416-a625-995ed59e230d nodeName:}" failed. No retries permitted until 2025-12-01 08:51:57.79294018 +0000 UTC m=+797.865228074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b3b8a95d-6924-4416-a625-995ed59e230d-cert") pod "frr-k8s-webhook-server-7fcb986d4-nmlzc" (UID: "b3b8a95d-6924-4416-a625-995ed59e230d") : secret "frr-k8s-webhook-server-cert" not found Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.293681 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-metrics\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.317906 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5c56f"] Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.320459 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5c56f" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.319493 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8rlw\" (UniqueName: \"kubernetes.io/projected/b3b8a95d-6924-4416-a625-995ed59e230d-kube-api-access-v8rlw\") pod \"frr-k8s-webhook-server-7fcb986d4-nmlzc\" (UID: \"b3b8a95d-6924-4416-a625-995ed59e230d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.326797 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.327080 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.327280 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4hh62" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.327504 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.337165 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-wrs47"] Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.339905 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-wrs47" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.342855 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.356920 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-wrs47"] Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.394886 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-reloader\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.394960 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-metallb-excludel2\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.394987 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8bt7\" (UniqueName: \"kubernetes.io/projected/67f63643-d748-4058-b24c-66ce8a8c3234-kube-api-access-z8bt7\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395013 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef20aee9-f534-4832-9bb0-ef4ec0c3c807-metrics-certs\") pod \"controller-f8648f98b-wrs47\" (UID: \"ef20aee9-f534-4832-9bb0-ef4ec0c3c807\") " pod="metallb-system/controller-f8648f98b-wrs47" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395035 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-metrics-certs\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395059 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-frr-sockets\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395102 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hsl\" (UniqueName: \"kubernetes.io/projected/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-kube-api-access-j9hsl\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395220 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67f63643-d748-4058-b24c-66ce8a8c3234-metrics-certs\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395246 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-frr-conf\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395272 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4gbh\" (UniqueName: \"kubernetes.io/projected/ef20aee9-f534-4832-9bb0-ef4ec0c3c807-kube-api-access-b4gbh\") pod \"controller-f8648f98b-wrs47\" (UID: \"ef20aee9-f534-4832-9bb0-ef4ec0c3c807\") " pod="metallb-system/controller-f8648f98b-wrs47" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395295 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef20aee9-f534-4832-9bb0-ef4ec0c3c807-cert\") pod \"controller-f8648f98b-wrs47\" (UID: \"ef20aee9-f534-4832-9bb0-ef4ec0c3c807\") " pod="metallb-system/controller-f8648f98b-wrs47" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395330 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/67f63643-d748-4058-b24c-66ce8a8c3234-frr-startup\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395349 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-metrics\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395392 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-memberlist\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " 
pod="metallb-system/speaker-5c56f" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.395903 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-reloader\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.396482 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-frr-sockets\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: E1201 08:51:57.396601 4689 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 01 08:51:57 crc kubenswrapper[4689]: E1201 08:51:57.396697 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67f63643-d748-4058-b24c-66ce8a8c3234-metrics-certs podName:67f63643-d748-4058-b24c-66ce8a8c3234 nodeName:}" failed. No retries permitted until 2025-12-01 08:51:57.896676389 +0000 UTC m=+797.968964293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/67f63643-d748-4058-b24c-66ce8a8c3234-metrics-certs") pod "frr-k8s-5j4hf" (UID: "67f63643-d748-4058-b24c-66ce8a8c3234") : secret "frr-k8s-certs-secret" not found Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.396931 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-frr-conf\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.397341 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/67f63643-d748-4058-b24c-66ce8a8c3234-metrics\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.397450 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/67f63643-d748-4058-b24c-66ce8a8c3234-frr-startup\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.416009 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8bt7\" (UniqueName: \"kubernetes.io/projected/67f63643-d748-4058-b24c-66ce8a8c3234-kube-api-access-z8bt7\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.496194 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef20aee9-f534-4832-9bb0-ef4ec0c3c807-metrics-certs\") pod \"controller-f8648f98b-wrs47\" (UID: \"ef20aee9-f534-4832-9bb0-ef4ec0c3c807\") " pod="metallb-system/controller-f8648f98b-wrs47" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.496483 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-metrics-certs\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.496616 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9hsl\" (UniqueName: \"kubernetes.io/projected/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-kube-api-access-j9hsl\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.496740 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4gbh\" (UniqueName: \"kubernetes.io/projected/ef20aee9-f534-4832-9bb0-ef4ec0c3c807-kube-api-access-b4gbh\") pod \"controller-f8648f98b-wrs47\" (UID: \"ef20aee9-f534-4832-9bb0-ef4ec0c3c807\") " pod="metallb-system/controller-f8648f98b-wrs47" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.496860 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef20aee9-f534-4832-9bb0-ef4ec0c3c807-cert\") pod \"controller-f8648f98b-wrs47\" (UID: \"ef20aee9-f534-4832-9bb0-ef4ec0c3c807\") " pod="metallb-system/controller-f8648f98b-wrs47" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.496963 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-memberlist\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f" Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.497678 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-metallb-excludel2\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f" Dec 01 08:51:57 crc kubenswrapper[4689]: E1201 08:51:57.497217 4689 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 01 08:51:57 crc kubenswrapper[4689]: E1201 08:51:57.497899 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-memberlist podName:4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1 nodeName:}" failed. No retries permitted until 2025-12-01 08:51:57.997875968 +0000 UTC m=+798.070163872 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-memberlist") pod "speaker-5c56f" (UID: "4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1") : secret "metallb-memberlist" not found
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.498321 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-metallb-excludel2\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.500841 4689 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.501635 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef20aee9-f534-4832-9bb0-ef4ec0c3c807-metrics-certs\") pod \"controller-f8648f98b-wrs47\" (UID: \"ef20aee9-f534-4832-9bb0-ef4ec0c3c807\") " pod="metallb-system/controller-f8648f98b-wrs47"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.505164 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-metrics-certs\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.512760 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef20aee9-f534-4832-9bb0-ef4ec0c3c807-cert\") pod \"controller-f8648f98b-wrs47\" (UID: \"ef20aee9-f534-4832-9bb0-ef4ec0c3c807\") " pod="metallb-system/controller-f8648f98b-wrs47"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.518778 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9hsl\" (UniqueName: \"kubernetes.io/projected/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-kube-api-access-j9hsl\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.532932 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4gbh\" (UniqueName: \"kubernetes.io/projected/ef20aee9-f534-4832-9bb0-ef4ec0c3c807-kube-api-access-b4gbh\") pod \"controller-f8648f98b-wrs47\" (UID: \"ef20aee9-f534-4832-9bb0-ef4ec0c3c807\") " pod="metallb-system/controller-f8648f98b-wrs47"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.660952 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-wrs47"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.801100 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3b8a95d-6924-4416-a625-995ed59e230d-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-nmlzc\" (UID: \"b3b8a95d-6924-4416-a625-995ed59e230d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.806409 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3b8a95d-6924-4416-a625-995ed59e230d-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-nmlzc\" (UID: \"b3b8a95d-6924-4416-a625-995ed59e230d\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.902069 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67f63643-d748-4058-b24c-66ce8a8c3234-metrics-certs\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf"
Dec 01 08:51:57 crc kubenswrapper[4689]: I1201 08:51:57.907359 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67f63643-d748-4058-b24c-66ce8a8c3234-metrics-certs\") pod \"frr-k8s-5j4hf\" (UID: \"67f63643-d748-4058-b24c-66ce8a8c3234\") " pod="metallb-system/frr-k8s-5j4hf"
Dec 01 08:51:58 crc kubenswrapper[4689]: I1201 08:51:58.003086 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-memberlist\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f"
Dec 01 08:51:58 crc kubenswrapper[4689]: E1201 08:51:58.003271 4689 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 01 08:51:58 crc kubenswrapper[4689]: E1201 08:51:58.003354 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-memberlist podName:4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1 nodeName:}" failed. No retries permitted until 2025-12-01 08:51:59.003323381 +0000 UTC m=+799.075611295 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-memberlist") pod "speaker-5c56f" (UID: "4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1") : secret "metallb-memberlist" not found
Dec 01 08:51:58 crc kubenswrapper[4689]: I1201 08:51:58.093296 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc"
Dec 01 08:51:58 crc kubenswrapper[4689]: I1201 08:51:58.108828 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-wrs47"]
Dec 01 08:51:58 crc kubenswrapper[4689]: I1201 08:51:58.110182 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5j4hf"
Dec 01 08:51:58 crc kubenswrapper[4689]: I1201 08:51:58.377090 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc"]
Dec 01 08:51:58 crc kubenswrapper[4689]: I1201 08:51:58.591066 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wrs47" event={"ID":"ef20aee9-f534-4832-9bb0-ef4ec0c3c807","Type":"ContainerStarted","Data":"0aefa30476b7c4bcdd4ea0634afef08f63ab282bf777b53cbf092e62ffcb8b6e"}
Dec 01 08:51:58 crc kubenswrapper[4689]: I1201 08:51:58.592082 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" event={"ID":"b3b8a95d-6924-4416-a625-995ed59e230d","Type":"ContainerStarted","Data":"3fe3ca00f33e35417c7281dc14d65bc41c2756722d83594b20837f856a3456d4"}
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.016976 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-memberlist\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f"
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.039112 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1-memberlist\") pod \"speaker-5c56f\" (UID: \"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1\") " pod="metallb-system/speaker-5c56f"
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.145505 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5c56f"
Dec 01 08:51:59 crc kubenswrapper[4689]: W1201 08:51:59.173963 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e0a2cf0_4edb_4f5e_90d2_e276ddd43fd1.slice/crio-b5bc8f0452c03fedf96eeea2a86af750c65401e43ea349fa022f6090a873488a WatchSource:0}: Error finding container b5bc8f0452c03fedf96eeea2a86af750c65401e43ea349fa022f6090a873488a: Status 404 returned error can't find the container with id b5bc8f0452c03fedf96eeea2a86af750c65401e43ea349fa022f6090a873488a
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.600353 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerStarted","Data":"2c5c61ac5639b08c8d0a16bf53837ee4e0ddeccc132f39b536e4c412d1de4ed9"}
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.602215 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wrs47" event={"ID":"ef20aee9-f534-4832-9bb0-ef4ec0c3c807","Type":"ContainerStarted","Data":"1c29d86a3bf9ff1436a872d9bbcbbf2d7019f5a5709d1b7d5cca497ba71d15b6"}
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.602301 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wrs47" event={"ID":"ef20aee9-f534-4832-9bb0-ef4ec0c3c807","Type":"ContainerStarted","Data":"1cfe64137162a5ef0b8ab8dceb067f31339d69ac5751cfc48e9897dd65658c5f"}
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.602322 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-wrs47"
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.604681 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5c56f" event={"ID":"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1","Type":"ContainerStarted","Data":"e5d7e508a15b7fbc78e47144c9b5278a60dab21b547acd114dee6298729f4e40"}
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.604715 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5c56f" event={"ID":"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1","Type":"ContainerStarted","Data":"b5bc8f0452c03fedf96eeea2a86af750c65401e43ea349fa022f6090a873488a"}
Dec 01 08:51:59 crc kubenswrapper[4689]: I1201 08:51:59.623708 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-wrs47" podStartSLOduration=2.623689287 podStartE2EDuration="2.623689287s" podCreationTimestamp="2025-12-01 08:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:51:59.622893595 +0000 UTC m=+799.695181509" watchObservedRunningTime="2025-12-01 08:51:59.623689287 +0000 UTC m=+799.695977191"
Dec 01 08:52:00 crc kubenswrapper[4689]: I1201 08:52:00.614155 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5c56f" event={"ID":"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1","Type":"ContainerStarted","Data":"40b00eb4c6acf1e8bcb5f07863f017a7e8ec4150432486e7f9fc74e776c4f4e3"}
Dec 01 08:52:00 crc kubenswrapper[4689]: I1201 08:52:00.614547 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5c56f"
Dec 01 08:52:00 crc kubenswrapper[4689]: I1201 08:52:00.648921 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5c56f" podStartSLOduration=3.648902896 podStartE2EDuration="3.648902896s" podCreationTimestamp="2025-12-01 08:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:52:00.645467572 +0000 UTC m=+800.717755476" watchObservedRunningTime="2025-12-01 08:52:00.648902896 +0000 UTC m=+800.721190790"
Dec 01 08:52:06 crc kubenswrapper[4689]: I1201 08:52:06.648442 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" event={"ID":"b3b8a95d-6924-4416-a625-995ed59e230d","Type":"ContainerStarted","Data":"b1db51fa46542660b98af4fd3546b2f369a3eb985aba2651d2d2b0ca268a5650"}
Dec 01 08:52:06 crc kubenswrapper[4689]: I1201 08:52:06.649073 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc"
Dec 01 08:52:06 crc kubenswrapper[4689]: I1201 08:52:06.652238 4689 generic.go:334] "Generic (PLEG): container finished" podID="67f63643-d748-4058-b24c-66ce8a8c3234" containerID="62de8dbc0a70b89fe8be26c5b4656b21e2d2a979ee00159a4f7e1b7fa4399588" exitCode=0
Dec 01 08:52:06 crc kubenswrapper[4689]: I1201 08:52:06.652300 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerDied","Data":"62de8dbc0a70b89fe8be26c5b4656b21e2d2a979ee00159a4f7e1b7fa4399588"}
Dec 01 08:52:06 crc kubenswrapper[4689]: I1201 08:52:06.679759 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" podStartSLOduration=2.104357734 podStartE2EDuration="9.679736083s" podCreationTimestamp="2025-12-01 08:51:57 +0000 UTC" firstStartedPulling="2025-12-01 08:51:58.390966459 +0000 UTC m=+798.463254363" lastFinishedPulling="2025-12-01 08:52:05.966344798 +0000 UTC m=+806.038632712" observedRunningTime="2025-12-01 08:52:06.674190531 +0000 UTC m=+806.746478445" watchObservedRunningTime="2025-12-01 08:52:06.679736083 +0000 UTC m=+806.752023997"
Dec 01 08:52:07 crc kubenswrapper[4689]: I1201 08:52:07.659332 4689 generic.go:334] "Generic (PLEG): container finished" podID="67f63643-d748-4058-b24c-66ce8a8c3234" containerID="3799a4bb2f83343e2a8a354954c94f93718fda91ec252acbbee81e2d683a1737" exitCode=0
Dec 01 08:52:07 crc kubenswrapper[4689]: I1201 08:52:07.659533 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerDied","Data":"3799a4bb2f83343e2a8a354954c94f93718fda91ec252acbbee81e2d683a1737"}
Dec 01 08:52:08 crc kubenswrapper[4689]: I1201 08:52:08.668310 4689 generic.go:334] "Generic (PLEG): container finished" podID="67f63643-d748-4058-b24c-66ce8a8c3234" containerID="316ba0891edb187f38226ac46ddad4ac8f9288aed8dc8c63c16b7086e9d93b15" exitCode=0
Dec 01 08:52:08 crc kubenswrapper[4689]: I1201 08:52:08.668408 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerDied","Data":"316ba0891edb187f38226ac46ddad4ac8f9288aed8dc8c63c16b7086e9d93b15"}
Dec 01 08:52:09 crc kubenswrapper[4689]: I1201 08:52:09.150500 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5c56f"
Dec 01 08:52:09 crc kubenswrapper[4689]: I1201 08:52:09.684391 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerStarted","Data":"814895b5f89a4620c2fbc9fae7b9d026cf7b406552e552a40ffaa2e029efc1d1"}
Dec 01 08:52:09 crc kubenswrapper[4689]: I1201 08:52:09.684785 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerStarted","Data":"192391cd6bbeefc4c0b36ce20c2f9f1a88b5bf8585f1e80a6c1eb86a9f045ba8"}
Dec 01 08:52:09 crc kubenswrapper[4689]: I1201 08:52:09.684800 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerStarted","Data":"882e8b12c00a258fc42133c561d07329a84e1a3e3eb243199b4dba4a89fb32ec"}
Dec 01 08:52:09 crc kubenswrapper[4689]: I1201 08:52:09.684811 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerStarted","Data":"a18f052a10909c7fdc87d752ff93b173cc4f7d08a7b486723e057dbc4764d1df"}
Dec 01 08:52:09 crc kubenswrapper[4689]: I1201 08:52:09.684823 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerStarted","Data":"1d0b41ea95e4758a5262a4c250f4dbdb119d3a551f7cc60ac7742485a3776b77"}
Dec 01 08:52:10 crc kubenswrapper[4689]: I1201 08:52:10.697932 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5j4hf" event={"ID":"67f63643-d748-4058-b24c-66ce8a8c3234","Type":"ContainerStarted","Data":"2f57f5ac6136eacaa92e93bc126e0ee3429b3168c9f8f354d56c9e7b6489c664"}
Dec 01 08:52:10 crc kubenswrapper[4689]: I1201 08:52:10.698263 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5j4hf"
Dec 01 08:52:10 crc kubenswrapper[4689]: I1201 08:52:10.726521 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5j4hf" podStartSLOduration=6.376091003 podStartE2EDuration="13.726495784s" podCreationTimestamp="2025-12-01 08:51:57 +0000 UTC" firstStartedPulling="2025-12-01 08:51:58.644182463 +0000 UTC m=+798.716470377" lastFinishedPulling="2025-12-01 08:52:05.994587254 +0000 UTC m=+806.066875158" observedRunningTime="2025-12-01 08:52:10.723476011 +0000 UTC m=+810.795763985" watchObservedRunningTime="2025-12-01 08:52:10.726495784 +0000 UTC m=+810.798783698"
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.653209 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rxwb4"]
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.654265 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rxwb4"
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.664210 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.664611 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rd5pr"
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.666059 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.754643 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rxwb4"]
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.756603 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkcnh\" (UniqueName: \"kubernetes.io/projected/e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7-kube-api-access-wkcnh\") pod \"openstack-operator-index-rxwb4\" (UID: \"e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7\") " pod="openstack-operators/openstack-operator-index-rxwb4"
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.858069 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkcnh\" (UniqueName: \"kubernetes.io/projected/e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7-kube-api-access-wkcnh\") pod \"openstack-operator-index-rxwb4\" (UID: \"e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7\") " pod="openstack-operators/openstack-operator-index-rxwb4"
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.894498 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkcnh\" (UniqueName: \"kubernetes.io/projected/e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7-kube-api-access-wkcnh\") pod \"openstack-operator-index-rxwb4\" (UID: \"e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7\") " pod="openstack-operators/openstack-operator-index-rxwb4"
Dec 01 08:52:12 crc kubenswrapper[4689]: I1201 08:52:12.975331 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rxwb4"
Dec 01 08:52:13 crc kubenswrapper[4689]: I1201 08:52:13.112757 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5j4hf"
Dec 01 08:52:13 crc kubenswrapper[4689]: I1201 08:52:13.173590 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5j4hf"
Dec 01 08:52:13 crc kubenswrapper[4689]: I1201 08:52:13.540488 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rxwb4"]
Dec 01 08:52:13 crc kubenswrapper[4689]: I1201 08:52:13.768290 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rxwb4" event={"ID":"e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7","Type":"ContainerStarted","Data":"c8461ac0e5d0e003a7fcda5b82f95631833c5e1513ce76de76e8bbba130f5878"}
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.010549 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rxwb4"]
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.631177 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jss7v"]
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.632800 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jss7v"
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.644947 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jss7v"]
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.791528 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rxwb4" event={"ID":"e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7","Type":"ContainerStarted","Data":"97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb"}
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.791679 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rxwb4" podUID="e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7" containerName="registry-server" containerID="cri-o://97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb" gracePeriod=2
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.806199 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hjqt\" (UniqueName: \"kubernetes.io/projected/b5026a2c-ab73-4b77-99d4-79dd6bcdb139-kube-api-access-6hjqt\") pod \"openstack-operator-index-jss7v\" (UID: \"b5026a2c-ab73-4b77-99d4-79dd6bcdb139\") " pod="openstack-operators/openstack-operator-index-jss7v"
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.811076 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rxwb4" podStartSLOduration=2.353071235 podStartE2EDuration="4.811056497s" podCreationTimestamp="2025-12-01 08:52:12 +0000 UTC" firstStartedPulling="2025-12-01 08:52:13.560828354 +0000 UTC m=+813.633116258" lastFinishedPulling="2025-12-01 08:52:16.018813586 +0000 UTC m=+816.091101520" observedRunningTime="2025-12-01 08:52:16.809102723 +0000 UTC m=+816.881390687" watchObservedRunningTime="2025-12-01 08:52:16.811056497 +0000 UTC m=+816.883344411"
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.907479 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hjqt\" (UniqueName: \"kubernetes.io/projected/b5026a2c-ab73-4b77-99d4-79dd6bcdb139-kube-api-access-6hjqt\") pod \"openstack-operator-index-jss7v\" (UID: \"b5026a2c-ab73-4b77-99d4-79dd6bcdb139\") " pod="openstack-operators/openstack-operator-index-jss7v"
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.936280 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hjqt\" (UniqueName: \"kubernetes.io/projected/b5026a2c-ab73-4b77-99d4-79dd6bcdb139-kube-api-access-6hjqt\") pod \"openstack-operator-index-jss7v\" (UID: \"b5026a2c-ab73-4b77-99d4-79dd6bcdb139\") " pod="openstack-operators/openstack-operator-index-jss7v"
Dec 01 08:52:16 crc kubenswrapper[4689]: I1201 08:52:16.957943 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jss7v"
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.169790 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rxwb4"
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.312915 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkcnh\" (UniqueName: \"kubernetes.io/projected/e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7-kube-api-access-wkcnh\") pod \"e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7\" (UID: \"e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7\") "
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.318053 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7-kube-api-access-wkcnh" (OuterVolumeSpecName: "kube-api-access-wkcnh") pod "e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7" (UID: "e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7"). InnerVolumeSpecName "kube-api-access-wkcnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.360393 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jss7v"]
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.414877 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkcnh\" (UniqueName: \"kubernetes.io/projected/e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7-kube-api-access-wkcnh\") on node \"crc\" DevicePath \"\""
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.666276 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-wrs47"
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.798674 4689 generic.go:334] "Generic (PLEG): container finished" podID="e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7" containerID="97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb" exitCode=0
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.798718 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rxwb4" event={"ID":"e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7","Type":"ContainerDied","Data":"97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb"}
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.798744 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rxwb4"
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.799106 4689 scope.go:117] "RemoveContainer" containerID="97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb"
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.799077 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rxwb4" event={"ID":"e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7","Type":"ContainerDied","Data":"c8461ac0e5d0e003a7fcda5b82f95631833c5e1513ce76de76e8bbba130f5878"}
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.800033 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jss7v" event={"ID":"b5026a2c-ab73-4b77-99d4-79dd6bcdb139","Type":"ContainerStarted","Data":"6f239c3d2bfa7c7bdaceddda1500df772cf8ea29032ef9aa587f5566c6ba07c7"}
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.800184 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jss7v" event={"ID":"b5026a2c-ab73-4b77-99d4-79dd6bcdb139","Type":"ContainerStarted","Data":"5febe4d3167eaf80a33298971543e880258a0039e0e21caeea5fce398632cbec"}
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.815710 4689 scope.go:117] "RemoveContainer" containerID="97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb"
Dec 01 08:52:17 crc kubenswrapper[4689]: E1201 08:52:17.816287 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb\": container with ID starting with 97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb not found: ID does not exist" containerID="97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb"
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.816336 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb"} err="failed to get container status \"97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb\": rpc error: code = NotFound desc = could not find container \"97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb\": container with ID starting with 97e897bfd8df16212687a365e58b9129a43021ddba94a69b8a87ba0ef86b11bb not found: ID does not exist"
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.840233 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jss7v" podStartSLOduration=1.7116539419999999 podStartE2EDuration="1.840208393s" podCreationTimestamp="2025-12-01 08:52:16 +0000 UTC" firstStartedPulling="2025-12-01 08:52:17.373105694 +0000 UTC m=+817.445393608" lastFinishedPulling="2025-12-01 08:52:17.501660165 +0000 UTC m=+817.573948059" observedRunningTime="2025-12-01 08:52:17.82366234 +0000 UTC m=+817.895950254" watchObservedRunningTime="2025-12-01 08:52:17.840208393 +0000 UTC m=+817.912496297"
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.841561 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rxwb4"]
Dec 01 08:52:17 crc kubenswrapper[4689]: I1201 08:52:17.845900 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rxwb4"]
Dec 01 08:52:18 crc kubenswrapper[4689]: I1201 08:52:18.098052 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc"
Dec 01 08:52:18 crc kubenswrapper[4689]: I1201 08:52:18.115061 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5j4hf"
Dec 01 08:52:19 crc kubenswrapper[4689]: I1201 08:52:19.059122 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7" path="/var/lib/kubelet/pods/e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7/volumes"
Dec 01 08:52:26 crc kubenswrapper[4689]: I1201 08:52:26.959203 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-jss7v"
Dec 01 08:52:26 crc kubenswrapper[4689]: I1201 08:52:26.960323 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-jss7v"
Dec 01 08:52:27 crc kubenswrapper[4689]: I1201 08:52:27.007661 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-jss7v"
Dec 01 08:52:27 crc kubenswrapper[4689]: I1201 08:52:27.906229 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-jss7v"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.249011 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"]
Dec 01 08:52:34 crc kubenswrapper[4689]: E1201 08:52:34.250529 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7" containerName="registry-server"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.250573 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7" containerName="registry-server"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.250804 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c1499c-c8f6-4bb4-b02b-f9db7d5d90f7" containerName="registry-server"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.252279 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.255801 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bphgg"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.266826 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"]
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.270504 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-bundle\") pod \"f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") " pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.270568 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-util\") pod \"f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") " pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.270608 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfbw\" (UniqueName: \"kubernetes.io/projected/8405d928-22c4-4389-9b08-f6e3dc2acfdc-kube-api-access-2rfbw\") pod \"f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") " pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.372747 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-util\") pod \"f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") " pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.372847 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfbw\" (UniqueName: \"kubernetes.io/projected/8405d928-22c4-4389-9b08-f6e3dc2acfdc-kube-api-access-2rfbw\") pod \"f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") " pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.372935 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-bundle\") pod \"f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") " pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.373944 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-bundle\") pod \"f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") " pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.374164 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-util\") pod \"f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") " pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.398974 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfbw\" (UniqueName: \"kubernetes.io/projected/8405d928-22c4-4389-9b08-f6e3dc2acfdc-kube-api-access-2rfbw\") pod \"f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") " pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.584575 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.811872 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"]
Dec 01 08:52:34 crc kubenswrapper[4689]: W1201 08:52:34.822305 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8405d928_22c4_4389_9b08_f6e3dc2acfdc.slice/crio-13e5ebf25b0417a8658e78bc69378c385decac6ab9ee1069919e45005b579dfe WatchSource:0}: Error finding container 13e5ebf25b0417a8658e78bc69378c385decac6ab9ee1069919e45005b579dfe: Status 404 returned error can't find the container with id 13e5ebf25b0417a8658e78bc69378c385decac6ab9ee1069919e45005b579dfe
Dec 01 08:52:34 crc kubenswrapper[4689]: I1201 08:52:34.950340 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj" event={"ID":"8405d928-22c4-4389-9b08-f6e3dc2acfdc","Type":"ContainerStarted","Data":"13e5ebf25b0417a8658e78bc69378c385decac6ab9ee1069919e45005b579dfe"}
Dec 01 08:52:36 crc kubenswrapper[4689]: I1201 08:52:36.973987 4689 generic.go:334] "Generic (PLEG): container finished" podID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerID="0ea1a2e57b3c5745981cf57dc557818624b8ae921ac98acc8382539e60db1ba1" exitCode=0
Dec 01 08:52:36 crc kubenswrapper[4689]: I1201 08:52:36.974175 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj" event={"ID":"8405d928-22c4-4389-9b08-f6e3dc2acfdc","Type":"ContainerDied","Data":"0ea1a2e57b3c5745981cf57dc557818624b8ae921ac98acc8382539e60db1ba1"}
Dec 01 08:52:37 crc kubenswrapper[4689]: I1201 08:52:37.983591 4689 generic.go:334] "Generic (PLEG): container finished" podID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerID="c33f941226626467fa23da6c5d444e4d268d211f336f2b2bfed199dc590fd3ad" exitCode=0
Dec 01 08:52:37 crc kubenswrapper[4689]: I1201 08:52:37.984432 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj" event={"ID":"8405d928-22c4-4389-9b08-f6e3dc2acfdc","Type":"ContainerDied","Data":"c33f941226626467fa23da6c5d444e4d268d211f336f2b2bfed199dc590fd3ad"}
Dec 01 08:52:38 crc kubenswrapper[4689]: I1201 08:52:38.994236 4689 generic.go:334] "Generic (PLEG): container finished" podID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerID="a625e32d2c3d5f2860196d42ff1c4289b730fff277c2fb903f09212230a0f2e8" exitCode=0
Dec 01 08:52:38 crc kubenswrapper[4689]: I1201 08:52:38.994402 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj" event={"ID":"8405d928-22c4-4389-9b08-f6e3dc2acfdc","Type":"ContainerDied","Data":"a625e32d2c3d5f2860196d42ff1c4289b730fff277c2fb903f09212230a0f2e8"}
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.226432 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.279976 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-bundle\") pod \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") "
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.280019 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-util\") pod \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") "
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.280116 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rfbw\" (UniqueName: \"kubernetes.io/projected/8405d928-22c4-4389-9b08-f6e3dc2acfdc-kube-api-access-2rfbw\") pod \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\" (UID: \"8405d928-22c4-4389-9b08-f6e3dc2acfdc\") "
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.280569 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-bundle" (OuterVolumeSpecName: "bundle") pod "8405d928-22c4-4389-9b08-f6e3dc2acfdc" (UID: "8405d928-22c4-4389-9b08-f6e3dc2acfdc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.285601 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8405d928-22c4-4389-9b08-f6e3dc2acfdc-kube-api-access-2rfbw" (OuterVolumeSpecName: "kube-api-access-2rfbw") pod "8405d928-22c4-4389-9b08-f6e3dc2acfdc" (UID: "8405d928-22c4-4389-9b08-f6e3dc2acfdc"). InnerVolumeSpecName "kube-api-access-2rfbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.292987 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-util" (OuterVolumeSpecName: "util") pod "8405d928-22c4-4389-9b08-f6e3dc2acfdc" (UID: "8405d928-22c4-4389-9b08-f6e3dc2acfdc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.381822 4689 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-util\") on node \"crc\" DevicePath \"\""
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.382052 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rfbw\" (UniqueName: \"kubernetes.io/projected/8405d928-22c4-4389-9b08-f6e3dc2acfdc-kube-api-access-2rfbw\") on node \"crc\" DevicePath \"\""
Dec 01 08:52:40 crc kubenswrapper[4689]: I1201 08:52:40.382173 4689 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8405d928-22c4-4389-9b08-f6e3dc2acfdc-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:52:41 crc kubenswrapper[4689]: I1201 08:52:41.008190 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj" event={"ID":"8405d928-22c4-4389-9b08-f6e3dc2acfdc","Type":"ContainerDied","Data":"13e5ebf25b0417a8658e78bc69378c385decac6ab9ee1069919e45005b579dfe"}
Dec 01 08:52:41 crc kubenswrapper[4689]: I1201 08:52:41.008243 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13e5ebf25b0417a8658e78bc69378c385decac6ab9ee1069919e45005b579dfe"
Dec 01 08:52:41 crc kubenswrapper[4689]: I1201 08:52:41.008352 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj"
Dec 01 08:52:42 crc kubenswrapper[4689]: E1201 08:52:42.966013 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8405d928_22c4_4389_9b08_f6e3dc2acfdc.slice\": RecentStats: unable to find data in memory cache]"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.069982 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"]
Dec 01 08:52:47 crc kubenswrapper[4689]: E1201 08:52:47.070568 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerName="util"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.070582 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerName="util"
Dec 01 08:52:47 crc kubenswrapper[4689]: E1201 08:52:47.070598 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerName="pull"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.070604 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerName="pull"
Dec 01 08:52:47 crc kubenswrapper[4689]: E1201 08:52:47.070616 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerName="extract"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.070623 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerName="extract"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.070752 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="8405d928-22c4-4389-9b08-f6e3dc2acfdc" containerName="extract"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.071282 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.074917 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-vdmtb"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.099300 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"]
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.172988 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8zt\" (UniqueName: \"kubernetes.io/projected/161f3daa-6403-48b2-8e33-b01d632a2316-kube-api-access-kc8zt\") pod \"openstack-operator-controller-operator-c6fb994fd-5lzsb\" (UID: \"161f3daa-6403-48b2-8e33-b01d632a2316\") " pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.273946 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8zt\" (UniqueName: \"kubernetes.io/projected/161f3daa-6403-48b2-8e33-b01d632a2316-kube-api-access-kc8zt\") pod \"openstack-operator-controller-operator-c6fb994fd-5lzsb\" (UID: \"161f3daa-6403-48b2-8e33-b01d632a2316\") " pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.292385 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc8zt\" (UniqueName: \"kubernetes.io/projected/161f3daa-6403-48b2-8e33-b01d632a2316-kube-api-access-kc8zt\") pod \"openstack-operator-controller-operator-c6fb994fd-5lzsb\" (UID: \"161f3daa-6403-48b2-8e33-b01d632a2316\") " pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.388494 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"
Dec 01 08:52:47 crc kubenswrapper[4689]: I1201 08:52:47.735011 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"]
Dec 01 08:52:48 crc kubenswrapper[4689]: I1201 08:52:48.071171 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" event={"ID":"161f3daa-6403-48b2-8e33-b01d632a2316","Type":"ContainerStarted","Data":"3549057d418269964f0710ff6279463cb0287e8c41771131d8cf58e28a935f9c"}
Dec 01 08:52:53 crc kubenswrapper[4689]: E1201 08:52:53.112271 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8405d928_22c4_4389_9b08_f6e3dc2acfdc.slice\": RecentStats: unable to find data in memory cache]"
Dec 01 08:52:54 crc kubenswrapper[4689]: I1201 08:52:54.114032 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" event={"ID":"161f3daa-6403-48b2-8e33-b01d632a2316","Type":"ContainerStarted","Data":"5058e6f5c019833af021d291ce4976a2be12230b9ac08aa8c1bd60069256a5ee"}
Dec 01 08:52:54 crc kubenswrapper[4689]: I1201 08:52:54.114434 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"
Dec 01 08:52:54 crc kubenswrapper[4689]: I1201 08:52:54.150355 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" podStartSLOduration=2.60102449 podStartE2EDuration="8.150309026s" podCreationTimestamp="2025-12-01 08:52:46 +0000 UTC" firstStartedPulling="2025-12-01 08:52:47.765309748 +0000 UTC m=+847.837597652" lastFinishedPulling="2025-12-01 08:52:53.314594284 +0000 UTC m=+853.386882188" observedRunningTime="2025-12-01 08:52:54.147588631 +0000 UTC m=+854.219876535" watchObservedRunningTime="2025-12-01 08:52:54.150309026 +0000 UTC m=+854.222596940"
Dec 01 08:53:03 crc kubenswrapper[4689]: E1201 08:53:03.288976 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8405d928_22c4_4389_9b08_f6e3dc2acfdc.slice\": RecentStats: unable to find data in memory cache]"
Dec 01 08:53:07 crc kubenswrapper[4689]: I1201 08:53:07.392314 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"
Dec 01 08:53:13 crc kubenswrapper[4689]: E1201 08:53:13.444773 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8405d928_22c4_4389_9b08_f6e3dc2acfdc.slice\": RecentStats: unable to find data in memory cache]"
Dec 01 08:53:23 crc kubenswrapper[4689]: E1201 08:53:23.607680 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8405d928_22c4_4389_9b08_f6e3dc2acfdc.slice\": RecentStats: unable to find data in memory cache]"
Dec 01 08:53:33 crc kubenswrapper[4689]: E1201 08:53:33.771119 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8405d928_22c4_4389_9b08_f6e3dc2acfdc.slice\": RecentStats: unable to find data in memory cache]"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.161273 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.162809 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.165700 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-6dvdc"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.165982 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.167177 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.171013 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mdb4c"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.175627 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.176602 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.181358 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6whfb"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.222575 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.223488 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.226881 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-24vlc"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.233766 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.237773 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.246453 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.247423 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.257754 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-5zg4n"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.265759 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.273544 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.292307 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pcl5\" (UniqueName: \"kubernetes.io/projected/2b35aff9-c66d-448c-9883-05e650f7f147-kube-api-access-7pcl5\") pod \"designate-operator-controller-manager-78b4bc895b-25q6j\" (UID: \"2b35aff9-c66d-448c-9883-05e650f7f147\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.292387 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhljd\" (UniqueName: \"kubernetes.io/projected/7ce2f328-3ee3-4800-89e4-9141c841c258-kube-api-access-vhljd\") pod \"barbican-operator-controller-manager-7d9dfd778-7vlqn\" (UID: \"7ce2f328-3ee3-4800-89e4-9141c841c258\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.292419 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzlv\" (UniqueName: \"kubernetes.io/projected/5266d333-3337-4481-9478-2e1df848bfa2-kube-api-access-dbzlv\") pod \"cinder-operator-controller-manager-859b6ccc6-7vrt5\" (UID: \"5266d333-3337-4481-9478-2e1df848bfa2\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.324408 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.330431 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.331529 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.341965 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mv5fg"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.347435 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.348665 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.353493 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dhvt8"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.353871 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.378424 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.383785 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-758d67db86-z298n"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.385173 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.388824 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mq8bs"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.397033 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhljd\" (UniqueName: \"kubernetes.io/projected/7ce2f328-3ee3-4800-89e4-9141c841c258-kube-api-access-vhljd\") pod \"barbican-operator-controller-manager-7d9dfd778-7vlqn\" (UID: \"7ce2f328-3ee3-4800-89e4-9141c841c258\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.397090 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clssl\" (UniqueName: \"kubernetes.io/projected/ae47d16a-5025-44f4-8fa4-f5aa08b126b8-kube-api-access-clssl\") pod \"glance-operator-controller-manager-668d9c48b9-xhrp7\" (UID: \"ae47d16a-5025-44f4-8fa4-f5aa08b126b8\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.397117 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzlv\" (UniqueName: \"kubernetes.io/projected/5266d333-3337-4481-9478-2e1df848bfa2-kube-api-access-dbzlv\") pod \"cinder-operator-controller-manager-859b6ccc6-7vrt5\" (UID: \"5266d333-3337-4481-9478-2e1df848bfa2\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.397147 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mprq5\" (UniqueName: \"kubernetes.io/projected/fc02885a-340a-4800-bd0b-360c0476b456-kube-api-access-mprq5\") pod \"heat-operator-controller-manager-5f64f6f8bb-w6qx2\" (UID: \"fc02885a-340a-4800-bd0b-360c0476b456\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.397197 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pcl5\" (UniqueName: \"kubernetes.io/projected/2b35aff9-c66d-448c-9883-05e650f7f147-kube-api-access-7pcl5\") pod \"designate-operator-controller-manager-78b4bc895b-25q6j\" (UID: \"2b35aff9-c66d-448c-9883-05e650f7f147\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.401634 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.402788 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.415288 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-n8fvw"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.419837 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.423410 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-758d67db86-z298n"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.463361 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pcl5\" (UniqueName: \"kubernetes.io/projected/2b35aff9-c66d-448c-9883-05e650f7f147-kube-api-access-7pcl5\") pod \"designate-operator-controller-manager-78b4bc895b-25q6j\" (UID: \"2b35aff9-c66d-448c-9883-05e650f7f147\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.472835 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-x722t"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.476630 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhljd\" (UniqueName: \"kubernetes.io/projected/7ce2f328-3ee3-4800-89e4-9141c841c258-kube-api-access-vhljd\") pod \"barbican-operator-controller-manager-7d9dfd778-7vlqn\" (UID: \"7ce2f328-3ee3-4800-89e4-9141c841c258\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.485147 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzlv\" (UniqueName: \"kubernetes.io/projected/5266d333-3337-4481-9478-2e1df848bfa2-kube-api-access-dbzlv\") pod \"cinder-operator-controller-manager-859b6ccc6-7vrt5\" (UID: \"5266d333-3337-4481-9478-2e1df848bfa2\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.485738 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.496056 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.503742 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nfrn\" (UniqueName: \"kubernetes.io/projected/ffc5e400-7853-4b1d-ae11-d6ffa553093a-kube-api-access-2nfrn\") pod \"horizon-operator-controller-manager-68c6d99b8f-dp8gl\" (UID: \"ffc5e400-7853-4b1d-ae11-d6ffa553093a\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.503787 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldmkg\" (UniqueName: \"kubernetes.io/projected/e44ef73a-e172-4557-920d-42f84488390e-kube-api-access-ldmkg\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.503811 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clssl\" (UniqueName: \"kubernetes.io/projected/ae47d16a-5025-44f4-8fa4-f5aa08b126b8-kube-api-access-clssl\") pod \"glance-operator-controller-manager-668d9c48b9-xhrp7\" (UID: \"ae47d16a-5025-44f4-8fa4-f5aa08b126b8\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.503839 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.503860 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mprq5\" (UniqueName: \"kubernetes.io/projected/fc02885a-340a-4800-bd0b-360c0476b456-kube-api-access-mprq5\") pod \"heat-operator-controller-manager-5f64f6f8bb-w6qx2\" (UID: \"fc02885a-340a-4800-bd0b-360c0476b456\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.503897 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfhqp\" (UniqueName: \"kubernetes.io/projected/2974e300-3f26-4ec0-912a-9ee6b78f33ce-kube-api-access-jfhqp\") pod \"keystone-operator-controller-manager-758d67db86-z298n\" (UID: \"2974e300-3f26-4ec0-912a-9ee6b78f33ce\") " pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.503924 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnqh\" (UniqueName: \"kubernetes.io/projected/ea3e4b08-090d-444e-ba53-a3df490fbaf8-kube-api-access-bbnqh\") pod \"ironic-operator-controller-manager-6c548fd776-f7xtr\" (UID: \"ea3e4b08-090d-444e-ba53-a3df490fbaf8\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.504766 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-k8b9v"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.513619 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.522455 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.525280 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.597995 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv"]
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.602067 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mprq5\" (UniqueName: \"kubernetes.io/projected/fc02885a-340a-4800-bd0b-360c0476b456-kube-api-access-mprq5\") pod \"heat-operator-controller-manager-5f64f6f8bb-w6qx2\" (UID: \"fc02885a-340a-4800-bd0b-360c0476b456\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.602637 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clssl\" (UniqueName: \"kubernetes.io/projected/ae47d16a-5025-44f4-8fa4-f5aa08b126b8-kube-api-access-clssl\") pod \"glance-operator-controller-manager-668d9c48b9-xhrp7\" (UID: \"ae47d16a-5025-44f4-8fa4-f5aa08b126b8\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7"
Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.603914 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.606036 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nfrn\" (UniqueName: \"kubernetes.io/projected/ffc5e400-7853-4b1d-ae11-d6ffa553093a-kube-api-access-2nfrn\") pod \"horizon-operator-controller-manager-68c6d99b8f-dp8gl\" (UID: \"ffc5e400-7853-4b1d-ae11-d6ffa553093a\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.606099 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldmkg\" (UniqueName: \"kubernetes.io/projected/e44ef73a-e172-4557-920d-42f84488390e-kube-api-access-ldmkg\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.606139 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56tkb\" (UniqueName: \"kubernetes.io/projected/3751be2a-8675-4b07-8198-101bfdd71d72-kube-api-access-56tkb\") pod \"manila-operator-controller-manager-6546668bfd-x722t\" (UID: \"3751be2a-8675-4b07-8198-101bfdd71d72\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.606161 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.606187 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfhqp\" (UniqueName: \"kubernetes.io/projected/2974e300-3f26-4ec0-912a-9ee6b78f33ce-kube-api-access-jfhqp\") pod \"keystone-operator-controller-manager-758d67db86-z298n\" (UID: \"2974e300-3f26-4ec0-912a-9ee6b78f33ce\") " pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.606223 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnqh\" (UniqueName: \"kubernetes.io/projected/ea3e4b08-090d-444e-ba53-a3df490fbaf8-kube-api-access-bbnqh\") pod \"ironic-operator-controller-manager-6c548fd776-f7xtr\" (UID: \"ea3e4b08-090d-444e-ba53-a3df490fbaf8\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" Dec 01 08:53:34 crc kubenswrapper[4689]: E1201 08:53:34.606761 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:34 crc kubenswrapper[4689]: E1201 08:53:34.606869 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert podName:e44ef73a-e172-4557-920d-42f84488390e nodeName:}" failed. No retries permitted until 2025-12-01 08:53:35.106839004 +0000 UTC m=+895.179126908 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert") pod "infra-operator-controller-manager-57548d458d-tgmx9" (UID: "e44ef73a-e172-4557-920d-42f84488390e") : secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.624679 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.630518 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hbf52" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.651591 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbnqh\" (UniqueName: \"kubernetes.io/projected/ea3e4b08-090d-444e-ba53-a3df490fbaf8-kube-api-access-bbnqh\") pod \"ironic-operator-controller-manager-6c548fd776-f7xtr\" (UID: \"ea3e4b08-090d-444e-ba53-a3df490fbaf8\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.655251 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-x722t"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.675060 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.676103 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.678296 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldmkg\" (UniqueName: \"kubernetes.io/projected/e44ef73a-e172-4557-920d-42f84488390e-kube-api-access-ldmkg\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.683662 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nfrn\" (UniqueName: \"kubernetes.io/projected/ffc5e400-7853-4b1d-ae11-d6ffa553093a-kube-api-access-2nfrn\") pod \"horizon-operator-controller-manager-68c6d99b8f-dp8gl\" (UID: \"ffc5e400-7853-4b1d-ae11-d6ffa553093a\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.685400 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kbkk5" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.690351 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfhqp\" (UniqueName: \"kubernetes.io/projected/2974e300-3f26-4ec0-912a-9ee6b78f33ce-kube-api-access-jfhqp\") pod \"keystone-operator-controller-manager-758d67db86-z298n\" (UID: \"2974e300-3f26-4ec0-912a-9ee6b78f33ce\") " pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.691288 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 
08:53:34.692362 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.704693 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-48mk2" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.709314 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56tkb\" (UniqueName: \"kubernetes.io/projected/3751be2a-8675-4b07-8198-101bfdd71d72-kube-api-access-56tkb\") pod \"manila-operator-controller-manager-6546668bfd-x722t\" (UID: \"3751be2a-8675-4b07-8198-101bfdd71d72\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.709354 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx8kx\" (UniqueName: \"kubernetes.io/projected/0d311ded-de3a-42e8-87d3-23c50c4fbd8a-kube-api-access-xx8kx\") pod \"mariadb-operator-controller-manager-56bbcc9d85-fm9bv\" (UID: \"0d311ded-de3a-42e8-87d3-23c50c4fbd8a\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.718618 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.728682 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.734098 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.735245 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.741686 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56tkb\" (UniqueName: \"kubernetes.io/projected/3751be2a-8675-4b07-8198-101bfdd71d72-kube-api-access-56tkb\") pod \"manila-operator-controller-manager-6546668bfd-x722t\" (UID: \"3751be2a-8675-4b07-8198-101bfdd71d72\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.751871 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.755426 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jwrnw" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.781438 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.821149 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx8kx\" (UniqueName: \"kubernetes.io/projected/0d311ded-de3a-42e8-87d3-23c50c4fbd8a-kube-api-access-xx8kx\") pod \"mariadb-operator-controller-manager-56bbcc9d85-fm9bv\" (UID: \"0d311ded-de3a-42e8-87d3-23c50c4fbd8a\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.821236 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdjz\" (UniqueName: \"kubernetes.io/projected/d4a1d78c-9486-4b3b-afac-2d51d2cb14df-kube-api-access-vvdjz\") pod \"nova-operator-controller-manager-697bc559fc-pssbg\" (UID: \"d4a1d78c-9486-4b3b-afac-2d51d2cb14df\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.821262 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnv4p\" (UniqueName: \"kubernetes.io/projected/4d923f8c-103b-4b12-b2e7-ea926440e5e7-kube-api-access-dnv4p\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ghq5b\" (UID: \"4d923f8c-103b-4b12-b2e7-ea926440e5e7\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.822945 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.838910 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.843343 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.844570 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.865934 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.866141 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wrfks" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.867317 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.874353 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx8kx\" (UniqueName: \"kubernetes.io/projected/0d311ded-de3a-42e8-87d3-23c50c4fbd8a-kube-api-access-xx8kx\") pod \"mariadb-operator-controller-manager-56bbcc9d85-fm9bv\" (UID: \"0d311ded-de3a-42e8-87d3-23c50c4fbd8a\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.904862 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.908458 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.912688 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.918466 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.920782 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.931141 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdjz\" (UniqueName: \"kubernetes.io/projected/d4a1d78c-9486-4b3b-afac-2d51d2cb14df-kube-api-access-vvdjz\") pod \"nova-operator-controller-manager-697bc559fc-pssbg\" (UID: \"d4a1d78c-9486-4b3b-afac-2d51d2cb14df\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.931214 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnv4p\" (UniqueName: \"kubernetes.io/projected/4d923f8c-103b-4b12-b2e7-ea926440e5e7-kube-api-access-dnv4p\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ghq5b\" (UID: \"4d923f8c-103b-4b12-b2e7-ea926440e5e7\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.931240 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfzn2\" (UniqueName: \"kubernetes.io/projected/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-kube-api-access-pfzn2\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.931283 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4bq2\" (UniqueName: \"kubernetes.io/projected/12885cbd-1d3e-40c1-b7f5-73bdb6572db9-kube-api-access-s4bq2\") pod \"octavia-operator-controller-manager-998648c74-vfnzm\" (UID: \"12885cbd-1d3e-40c1-b7f5-73bdb6572db9\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.931347 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.931391 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72p7h\" (UniqueName: \"kubernetes.io/projected/b3049390-311d-46ed-b472-d32a22f2f8d2-kube-api-access-72p7h\") pod \"ovn-operator-controller-manager-b6456fdb6-p296h\" (UID: \"b3049390-311d-46ed-b472-d32a22f2f8d2\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.931454 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9b26\" (UniqueName: \"kubernetes.io/projected/3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a-kube-api-access-n9b26\") pod \"placement-operator-controller-manager-78f8948974-nsnm9\" (UID: \"3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.933278 4689 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-456v5" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.934912 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lm6hj" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.940019 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.954857 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.962221 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnv4p\" (UniqueName: \"kubernetes.io/projected/4d923f8c-103b-4b12-b2e7-ea926440e5e7-kube-api-access-dnv4p\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-ghq5b\" (UID: \"4d923f8c-103b-4b12-b2e7-ea926440e5e7\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.962624 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.978159 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h"] Dec 01 08:53:34 crc kubenswrapper[4689]: I1201 08:53:34.990413 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdjz\" (UniqueName: \"kubernetes.io/projected/d4a1d78c-9486-4b3b-afac-2d51d2cb14df-kube-api-access-vvdjz\") pod \"nova-operator-controller-manager-697bc559fc-pssbg\" (UID: \"d4a1d78c-9486-4b3b-afac-2d51d2cb14df\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.007501 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.007771 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.033209 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9b26\" (UniqueName: \"kubernetes.io/projected/3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a-kube-api-access-n9b26\") pod \"placement-operator-controller-manager-78f8948974-nsnm9\" (UID: \"3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.033293 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfzn2\" (UniqueName: \"kubernetes.io/projected/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-kube-api-access-pfzn2\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.033654 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4bq2\" (UniqueName: \"kubernetes.io/projected/12885cbd-1d3e-40c1-b7f5-73bdb6572db9-kube-api-access-s4bq2\") pod \"octavia-operator-controller-manager-998648c74-vfnzm\" (UID: \"12885cbd-1d3e-40c1-b7f5-73bdb6572db9\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.033719 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.033750 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72p7h\" (UniqueName: \"kubernetes.io/projected/b3049390-311d-46ed-b472-d32a22f2f8d2-kube-api-access-72p7h\") pod \"ovn-operator-controller-manager-b6456fdb6-p296h\" (UID: \"b3049390-311d-46ed-b472-d32a22f2f8d2\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.033909 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.033964 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert podName:6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538 nodeName:}" failed. No retries permitted until 2025-12-01 08:53:35.533947778 +0000 UTC m=+895.606235682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" (UID: "6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.044297 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.046133 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.050042 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.062726 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2ltd2" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.078936 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72p7h\" (UniqueName: \"kubernetes.io/projected/b3049390-311d-46ed-b472-d32a22f2f8d2-kube-api-access-72p7h\") pod \"ovn-operator-controller-manager-b6456fdb6-p296h\" (UID: \"b3049390-311d-46ed-b472-d32a22f2f8d2\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.088502 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfzn2\" (UniqueName: \"kubernetes.io/projected/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-kube-api-access-pfzn2\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.090431 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9b26\" (UniqueName: \"kubernetes.io/projected/3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a-kube-api-access-n9b26\") pod \"placement-operator-controller-manager-78f8948974-nsnm9\" (UID: \"3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.097056 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4bq2\" (UniqueName: \"kubernetes.io/projected/12885cbd-1d3e-40c1-b7f5-73bdb6572db9-kube-api-access-s4bq2\") pod \"octavia-operator-controller-manager-998648c74-vfnzm\" (UID: \"12885cbd-1d3e-40c1-b7f5-73bdb6572db9\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.102824 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.104252 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.104274 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.105487 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.106264 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.109677 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-vnj9m" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.110060 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-tnrj8" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.191247 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.191619 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.191685 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert podName:e44ef73a-e172-4557-920d-42f84488390e nodeName:}" failed. No retries permitted until 2025-12-01 08:53:36.191666911 +0000 UTC m=+896.263954815 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert") pod "infra-operator-controller-manager-57548d458d-tgmx9" (UID: "e44ef73a-e172-4557-920d-42f84488390e") : secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.209816 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.240499 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.248705 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.250258 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-qvd29" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.283904 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.291496 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.292537 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltd7k\" (UniqueName: \"kubernetes.io/projected/5f9861d6-2700-4af6-b385-e79220c14b2e-kube-api-access-ltd7k\") pod \"watcher-operator-controller-manager-769dc69bc-sfplx\" (UID: \"5f9861d6-2700-4af6-b385-e79220c14b2e\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.292563 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tt5n\" (UniqueName: \"kubernetes.io/projected/8b33263b-a51c-49e4-b301-b975791e098a-kube-api-access-5tt5n\") pod \"swift-operator-controller-manager-5f8c65bbfc-5d8x5\" (UID: \"8b33263b-a51c-49e4-b301-b975791e098a\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.292658 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hlz\" (UniqueName: \"kubernetes.io/projected/af92d0ca-8211-49a0-9362-bd5749143fff-kube-api-access-54hlz\") pod \"telemetry-operator-controller-manager-76cc84c6bb-prvxn\" (UID: \"af92d0ca-8211-49a0-9362-bd5749143fff\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.304049 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.314804 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.364990 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.365999 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.368840 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dvxk6" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.369427 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.373281 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.376803 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.398476 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.398535 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.398560 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hlz\" (UniqueName: \"kubernetes.io/projected/af92d0ca-8211-49a0-9362-bd5749143fff-kube-api-access-54hlz\") pod \"telemetry-operator-controller-manager-76cc84c6bb-prvxn\" (UID: \"af92d0ca-8211-49a0-9362-bd5749143fff\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.398590 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwn2h\" (UniqueName: \"kubernetes.io/projected/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-kube-api-access-wwn2h\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.398617 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf8ll\" (UniqueName: \"kubernetes.io/projected/f94d79da-740a-4080-81d0-ff3bf1867b3d-kube-api-access-vf8ll\") pod \"test-operator-controller-manager-5854674fcc-vbkrn\" (UID: \"f94d79da-740a-4080-81d0-ff3bf1867b3d\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.398637 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltd7k\" (UniqueName: \"kubernetes.io/projected/5f9861d6-2700-4af6-b385-e79220c14b2e-kube-api-access-ltd7k\") pod \"watcher-operator-controller-manager-769dc69bc-sfplx\" (UID: \"5f9861d6-2700-4af6-b385-e79220c14b2e\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.398655 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tt5n\" (UniqueName: \"kubernetes.io/projected/8b33263b-a51c-49e4-b301-b975791e098a-kube-api-access-5tt5n\") pod \"swift-operator-controller-manager-5f8c65bbfc-5d8x5\" (UID: \"8b33263b-a51c-49e4-b301-b975791e098a\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.418031 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltd7k\" (UniqueName: 
\"kubernetes.io/projected/5f9861d6-2700-4af6-b385-e79220c14b2e-kube-api-access-ltd7k\") pod \"watcher-operator-controller-manager-769dc69bc-sfplx\" (UID: \"5f9861d6-2700-4af6-b385-e79220c14b2e\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.420110 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hlz\" (UniqueName: \"kubernetes.io/projected/af92d0ca-8211-49a0-9362-bd5749143fff-kube-api-access-54hlz\") pod \"telemetry-operator-controller-manager-76cc84c6bb-prvxn\" (UID: \"af92d0ca-8211-49a0-9362-bd5749143fff\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.422870 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tt5n\" (UniqueName: \"kubernetes.io/projected/8b33263b-a51c-49e4-b301-b975791e098a-kube-api-access-5tt5n\") pod \"swift-operator-controller-manager-5f8c65bbfc-5d8x5\" (UID: \"8b33263b-a51c-49e4-b301-b975791e098a\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.430033 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.446515 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.448935 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.457405 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xhr79" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.465076 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.499684 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.499791 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.499820 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwn2h\" (UniqueName: \"kubernetes.io/projected/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-kube-api-access-wwn2h\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " 
pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.499975 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf8ll\" (UniqueName: \"kubernetes.io/projected/f94d79da-740a-4080-81d0-ff3bf1867b3d-kube-api-access-vf8ll\") pod \"test-operator-controller-manager-5854674fcc-vbkrn\" (UID: \"f94d79da-740a-4080-81d0-ff3bf1867b3d\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.500045 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xn55\" (UniqueName: \"kubernetes.io/projected/7085b604-e50c-4940-ac21-b6fe208c82cd-kube-api-access-7xn55\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t56mz\" (UID: \"7085b604-e50c-4940-ac21-b6fe208c82cd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.499898 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.500147 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:36.000129565 +0000 UTC m=+896.072417459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "webhook-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.499940 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.500248 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:36.000220397 +0000 UTC m=+896.072508301 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "metrics-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.530974 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf8ll\" (UniqueName: \"kubernetes.io/projected/f94d79da-740a-4080-81d0-ff3bf1867b3d-kube-api-access-vf8ll\") pod \"test-operator-controller-manager-5854674fcc-vbkrn\" (UID: \"f94d79da-740a-4080-81d0-ff3bf1867b3d\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.542434 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwn2h\" (UniqueName: \"kubernetes.io/projected/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-kube-api-access-wwn2h\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.577809 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.584267 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.589844 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.600804 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.600885 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xn55\" (UniqueName: \"kubernetes.io/projected/7085b604-e50c-4940-ac21-b6fe208c82cd-kube-api-access-7xn55\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t56mz\" (UID: \"7085b604-e50c-4940-ac21-b6fe208c82cd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.601509 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: E1201 08:53:35.601635 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert podName:6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538 nodeName:}" failed. No retries permitted until 2025-12-01 08:53:36.601588621 +0000 UTC m=+896.673876525 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" (UID: "6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.605764 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.649470 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xn55\" (UniqueName: \"kubernetes.io/projected/7085b604-e50c-4940-ac21-b6fe208c82cd-kube-api-access-7xn55\") pod \"rabbitmq-cluster-operator-manager-668c99d594-t56mz\" (UID: \"7085b604-e50c-4940-ac21-b6fe208c82cd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.687348 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.689719 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.699119 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.715441 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-758d67db86-z298n"] Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.738769 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr"] Dec 01 08:53:35 crc kubenswrapper[4689]: W1201 08:53:35.783202 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2974e300_3f26_4ec0_912a_9ee6b78f33ce.slice/crio-798d86dc9a93bbc970166244a1d613e70665914204dc51d3063df0737ab58686 WatchSource:0}: Error finding container 798d86dc9a93bbc970166244a1d613e70665914204dc51d3063df0737ab58686: Status 404 returned error can't find the container with id 798d86dc9a93bbc970166244a1d613e70665914204dc51d3063df0737ab58686 Dec 01 08:53:35 crc kubenswrapper[4689]: W1201 08:53:35.810627 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ce2f328_3ee3_4800_89e4_9141c841c258.slice/crio-41295bea79b36b4011c0035a4f264b8504071d7af184ba4c5e2f30446fadd378 WatchSource:0}: Error finding container 41295bea79b36b4011c0035a4f264b8504071d7af184ba4c5e2f30446fadd378: Status 404 returned error can't find the container with id 41295bea79b36b4011c0035a4f264b8504071d7af184ba4c5e2f30446fadd378 Dec 01 08:53:35 crc kubenswrapper[4689]: I1201 08:53:35.908475 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.007421 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.007567 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.007702 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.007755 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:37.00773901 +0000 UTC m=+897.080026914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "webhook-server-cert" not found Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.008068 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.008151 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:37.008131541 +0000 UTC m=+897.080419445 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "metrics-server-cert" not found Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.071446 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.096683 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-x722t"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.115176 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.211925 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.212075 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.212135 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert podName:e44ef73a-e172-4557-920d-42f84488390e nodeName:}" failed. No retries permitted until 2025-12-01 08:53:38.212119491 +0000 UTC m=+898.284407395 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert") pod "infra-operator-controller-manager-57548d458d-tgmx9" (UID: "e44ef73a-e172-4557-920d-42f84488390e") : secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.245583 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.269240 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.275930 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.288083 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv"] Dec 01 08:53:36 crc kubenswrapper[4689]: W1201 08:53:36.304861 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc5e400_7853_4b1d_ae11_d6ffa553093a.slice/crio-c277204ca35edde2bf23ecaa9975a2612a00015bb244907ed4aee54fb1fe9c80 WatchSource:0}: Error finding container c277204ca35edde2bf23ecaa9975a2612a00015bb244907ed4aee54fb1fe9c80: Status 404 returned error can't find the container with id c277204ca35edde2bf23ecaa9975a2612a00015bb244907ed4aee54fb1fe9c80 Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.461096 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.528447 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.550197 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" event={"ID":"d4a1d78c-9486-4b3b-afac-2d51d2cb14df","Type":"ContainerStarted","Data":"7b59a7e2dfa13a8612df06306401c1de07b9ef69cc4715ce40e7039d87ff0ac4"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.551709 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm"] Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.554036 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4bq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-vfnzm_openstack-operators(12885cbd-1d3e-40c1-b7f5-73bdb6572db9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.556894 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4bq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-vfnzm_openstack-operators(12885cbd-1d3e-40c1-b7f5-73bdb6572db9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.557127 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" 
event={"ID":"3751be2a-8675-4b07-8198-101bfdd71d72","Type":"ContainerStarted","Data":"8a0bb89c2b67d53a7d735fe7c2e80e34b3687a8f08840296afd32d3157f38680"} Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.558624 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" podUID="12885cbd-1d3e-40c1-b7f5-73bdb6572db9" Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.559357 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" event={"ID":"ae47d16a-5025-44f4-8fa4-f5aa08b126b8","Type":"ContainerStarted","Data":"7f4a6414b8a6baa62744607b1d5e51a7e5c0d4e48fcf26b0bbbdb98ef81bc944"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.565792 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j" event={"ID":"2b35aff9-c66d-448c-9883-05e650f7f147","Type":"ContainerStarted","Data":"30131c3539f322d21e4e5ee6fdb55f4890a3ad3c494950a4d779338aeb82f329"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.567092 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" event={"ID":"4d923f8c-103b-4b12-b2e7-ea926440e5e7","Type":"ContainerStarted","Data":"aa08f3c9ac725868b6a166484140181bd9a59eafc23d10796e8d859fba08812a"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.568163 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" event={"ID":"5266d333-3337-4481-9478-2e1df848bfa2","Type":"ContainerStarted","Data":"21475d69db08577c69b738121a83785c552b7f4200afb61ae98f407cd45b6f65"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.570024 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" event={"ID":"ea3e4b08-090d-444e-ba53-a3df490fbaf8","Type":"ContainerStarted","Data":"a6ea2ea6ff62ea4683e78e355d44d4e934e1a63fa4c7ff6d14eddae947d93f88"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.575907 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" event={"ID":"b3049390-311d-46ed-b472-d32a22f2f8d2","Type":"ContainerStarted","Data":"1b250ac5c5818f0f17787dffff007b0c65b75b356208a6389d35370b87282cc7"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.581872 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" event={"ID":"2974e300-3f26-4ec0-912a-9ee6b78f33ce","Type":"ContainerStarted","Data":"798d86dc9a93bbc970166244a1d613e70665914204dc51d3063df0737ab58686"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.585605 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" event={"ID":"ffc5e400-7853-4b1d-ae11-d6ffa553093a","Type":"ContainerStarted","Data":"c277204ca35edde2bf23ecaa9975a2612a00015bb244907ed4aee54fb1fe9c80"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.598973 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" event={"ID":"fc02885a-340a-4800-bd0b-360c0476b456","Type":"ContainerStarted","Data":"a8378d32748415ab2a652d194612b7c5001f5ecd5123a23db0df573518714971"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.600905 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" event={"ID":"0d311ded-de3a-42e8-87d3-23c50c4fbd8a","Type":"ContainerStarted","Data":"ebdba53a859f43f06fcc7e730f10206f0c0c7b6d6a87a7a69c096660ed545010"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.603135 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" event={"ID":"7ce2f328-3ee3-4800-89e4-9141c841c258","Type":"ContainerStarted","Data":"41295bea79b36b4011c0035a4f264b8504071d7af184ba4c5e2f30446fadd378"} Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.611404 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.632650 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx"] Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.626316 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.632805 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert podName:6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538 nodeName:}" failed. No retries permitted until 2025-12-01 08:53:38.632776188 +0000 UTC m=+898.705064092 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" (UID: "6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.626203 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.637198 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.643724 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5"] Dec 01 08:53:36 crc kubenswrapper[4689]: I1201 08:53:36.651129 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn"] Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.671763 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ltd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-sfplx_openstack-operators(5f9861d6-2700-4af6-b385-e79220c14b2e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.674985 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ltd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-sfplx_openstack-operators(5f9861d6-2700-4af6-b385-e79220c14b2e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.675523 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xn55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-t56mz_openstack-operators(7085b604-e50c-4940-ac21-b6fe208c82cd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.675667 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vf8ll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-vbkrn_openstack-operators(f94d79da-740a-4080-81d0-ff3bf1867b3d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.676057 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" podUID="5f9861d6-2700-4af6-b385-e79220c14b2e" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.676713 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" podUID="7085b604-e50c-4940-ac21-b6fe208c82cd" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.678284 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vf8ll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-vbkrn_openstack-operators(f94d79da-740a-4080-81d0-ff3bf1867b3d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.678478 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tt5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-5d8x5_openstack-operators(8b33263b-a51c-49e4-b301-b975791e098a): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.680289 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54hlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-prvxn_openstack-operators(af92d0ca-8211-49a0-9362-bd5749143fff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.680527 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" podUID="f94d79da-740a-4080-81d0-ff3bf1867b3d" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.681233 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tt5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-5d8x5_openstack-operators(8b33263b-a51c-49e4-b301-b975791e098a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.681931 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54hlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-prvxn_openstack-operators(af92d0ca-8211-49a0-9362-bd5749143fff): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 01 08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.682415 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" podUID="8b33263b-a51c-49e4-b301-b975791e098a" Dec 01 
08:53:36 crc kubenswrapper[4689]: E1201 08:53:36.683084 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" podUID="af92d0ca-8211-49a0-9362-bd5749143fff" Dec 01 08:53:37 crc kubenswrapper[4689]: I1201 08:53:37.039302 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:37 crc kubenswrapper[4689]: I1201 08:53:37.039414 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.039585 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.039642 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:39.039626747 +0000 UTC m=+899.111914641 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "metrics-server-cert" not found Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.040065 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.040093 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:39.040085919 +0000 UTC m=+899.112373823 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "webhook-server-cert" not found Dec 01 08:53:37 crc kubenswrapper[4689]: I1201 08:53:37.619518 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" event={"ID":"7085b604-e50c-4940-ac21-b6fe208c82cd","Type":"ContainerStarted","Data":"c22bbfe20786223fe25ae739ab3dad57e2159df016e4e93da8143b09b4e299e9"} Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.623214 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" podUID="7085b604-e50c-4940-ac21-b6fe208c82cd" Dec 01 08:53:37 crc kubenswrapper[4689]: I1201 08:53:37.626399 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" event={"ID":"3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a","Type":"ContainerStarted","Data":"2395b0a633e6714ad860d038094ea8aafa21beb10ca1de21982c810880838b5c"} Dec 01 08:53:37 crc kubenswrapper[4689]: I1201 08:53:37.627971 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" event={"ID":"af92d0ca-8211-49a0-9362-bd5749143fff","Type":"ContainerStarted","Data":"d25c39aa49b4f7882e793e86421e661d64ac8330af2f45506e969cc02b80f69a"} Dec 01 08:53:37 crc kubenswrapper[4689]: I1201 08:53:37.651479 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" event={"ID":"8b33263b-a51c-49e4-b301-b975791e098a","Type":"ContainerStarted","Data":"386df5f3cefb0b96dfc5551c09062211690e1dfadcccee0495daea8749da29ae"} Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.653407 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" podUID="8b33263b-a51c-49e4-b301-b975791e098a" Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.658213 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" podUID="af92d0ca-8211-49a0-9362-bd5749143fff" Dec 01 08:53:37 crc kubenswrapper[4689]: I1201 08:53:37.668580 4689 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" event={"ID":"f94d79da-740a-4080-81d0-ff3bf1867b3d","Type":"ContainerStarted","Data":"bce787fbf56a675a812859aee704dd50632ee860c7cffbd0f1e8ff536de4a425"} Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.683269 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" podUID="f94d79da-740a-4080-81d0-ff3bf1867b3d" Dec 01 08:53:37 crc kubenswrapper[4689]: I1201 08:53:37.694507 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" event={"ID":"5f9861d6-2700-4af6-b385-e79220c14b2e","Type":"ContainerStarted","Data":"ef2bb5f9085fddc7bef67e3dc0133547641360ce75ac35f3b897d0464e2f8b32"} Dec 01 08:53:37 crc kubenswrapper[4689]: I1201 08:53:37.734598 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" event={"ID":"12885cbd-1d3e-40c1-b7f5-73bdb6572db9","Type":"ContainerStarted","Data":"10cd49db7d2ece5ae5f584d470827a5968a944eef65b1150a5867af10da0d93e"} Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.735083 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" podUID="5f9861d6-2700-4af6-b385-e79220c14b2e" Dec 01 08:53:37 crc kubenswrapper[4689]: E1201 08:53:37.739358 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" podUID="12885cbd-1d3e-40c1-b7f5-73bdb6572db9" Dec 01 08:53:38 crc kubenswrapper[4689]: I1201 08:53:38.273336 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:38 crc kubenswrapper[4689]: E1201 08:53:38.273522 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:38 crc kubenswrapper[4689]: 
E1201 08:53:38.273575 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert podName:e44ef73a-e172-4557-920d-42f84488390e nodeName:}" failed. No retries permitted until 2025-12-01 08:53:42.27356101 +0000 UTC m=+902.345848914 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert") pod "infra-operator-controller-manager-57548d458d-tgmx9" (UID: "e44ef73a-e172-4557-920d-42f84488390e") : secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:38 crc kubenswrapper[4689]: I1201 08:53:38.681178 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:38 crc kubenswrapper[4689]: E1201 08:53:38.681430 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:38 crc kubenswrapper[4689]: E1201 08:53:38.681482 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert podName:6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538 nodeName:}" failed. No retries permitted until 2025-12-01 08:53:42.681466898 +0000 UTC m=+902.753754802 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" (UID: "6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:38 crc kubenswrapper[4689]: E1201 08:53:38.747065 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" podUID="7085b604-e50c-4940-ac21-b6fe208c82cd" Dec 01 08:53:38 crc kubenswrapper[4689]: E1201 08:53:38.748976 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" podUID="5f9861d6-2700-4af6-b385-e79220c14b2e" Dec 01 08:53:38 crc kubenswrapper[4689]: E1201 08:53:38.749309 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" podUID="12885cbd-1d3e-40c1-b7f5-73bdb6572db9" Dec 01 08:53:38 crc kubenswrapper[4689]: E1201 08:53:38.749255 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" podUID="8b33263b-a51c-49e4-b301-b975791e098a" Dec 01 08:53:38 crc kubenswrapper[4689]: E1201 08:53:38.750439 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" podUID="f94d79da-740a-4080-81d0-ff3bf1867b3d" Dec 01 08:53:38 crc kubenswrapper[4689]: E1201 08:53:38.777361 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" podUID="af92d0ca-8211-49a0-9362-bd5749143fff" Dec 01 08:53:39 crc kubenswrapper[4689]: I1201 08:53:39.086753 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:39 crc kubenswrapper[4689]: I1201 08:53:39.086828 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:39 crc kubenswrapper[4689]: E1201 08:53:39.087135 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 08:53:39 crc kubenswrapper[4689]: E1201 08:53:39.087135 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 08:53:39 crc kubenswrapper[4689]: E1201 08:53:39.087199 4689 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:43.087182166 +0000 UTC m=+903.159470070 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "metrics-server-cert" not found Dec 01 08:53:39 crc kubenswrapper[4689]: E1201 08:53:39.087213 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:43.087208226 +0000 UTC m=+903.159496130 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "webhook-server-cert" not found Dec 01 08:53:39 crc kubenswrapper[4689]: I1201 08:53:39.146713 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:53:39 crc kubenswrapper[4689]: I1201 08:53:39.146805 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:53:42 crc kubenswrapper[4689]: I1201 08:53:42.337305 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:42 crc kubenswrapper[4689]: E1201 08:53:42.337551 4689 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:42 crc kubenswrapper[4689]: E1201 08:53:42.338171 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert podName:e44ef73a-e172-4557-920d-42f84488390e nodeName:}" failed. No retries permitted until 2025-12-01 08:53:50.338146389 +0000 UTC m=+910.410434293 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert") pod "infra-operator-controller-manager-57548d458d-tgmx9" (UID: "e44ef73a-e172-4557-920d-42f84488390e") : secret "infra-operator-webhook-server-cert" not found Dec 01 08:53:42 crc kubenswrapper[4689]: I1201 08:53:42.743730 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:42 crc kubenswrapper[4689]: E1201 08:53:42.744081 4689 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:42 crc kubenswrapper[4689]: E1201 08:53:42.744160 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert podName:6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538 nodeName:}" failed. No retries permitted until 2025-12-01 08:53:50.744140644 +0000 UTC m=+910.816428548 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" (UID: "6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 01 08:53:43 crc kubenswrapper[4689]: I1201 08:53:43.148532 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:43 crc kubenswrapper[4689]: I1201 08:53:43.148696 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:43 crc kubenswrapper[4689]: E1201 08:53:43.148699 4689 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 01 08:53:43 crc kubenswrapper[4689]: E1201 08:53:43.148799 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:51.148780303 +0000 UTC m=+911.221068207 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "metrics-server-cert" not found Dec 01 08:53:43 crc kubenswrapper[4689]: E1201 08:53:43.148804 4689 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 01 08:53:43 crc kubenswrapper[4689]: E1201 08:53:43.148837 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs podName:4f43cf3a-d166-44ba-8d44-9e81b0666e0a nodeName:}" failed. No retries permitted until 2025-12-01 08:53:51.148828204 +0000 UTC m=+911.221116108 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs") pod "openstack-operator-controller-manager-6fc767d767-8r9dw" (UID: "4f43cf3a-d166-44ba-8d44-9e81b0666e0a") : secret "webhook-server-cert" not found Dec 01 08:53:50 crc kubenswrapper[4689]: I1201 08:53:50.407339 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:50 crc kubenswrapper[4689]: I1201 08:53:50.419287 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44ef73a-e172-4557-920d-42f84488390e-cert\") pod \"infra-operator-controller-manager-57548d458d-tgmx9\" (UID: \"e44ef73a-e172-4557-920d-42f84488390e\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:50 crc kubenswrapper[4689]: I1201 08:53:50.574234 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dhvt8" Dec 01 08:53:50 crc kubenswrapper[4689]: I1201 08:53:50.582174 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:53:50 crc kubenswrapper[4689]: I1201 08:53:50.814201 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:50 crc kubenswrapper[4689]: I1201 08:53:50.820947 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9\" (UID: \"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:51 crc kubenswrapper[4689]: I1201 08:53:51.107928 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wrfks" Dec 01 08:53:51 crc kubenswrapper[4689]: I1201 08:53:51.117113 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:53:51 crc kubenswrapper[4689]: I1201 08:53:51.155658 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:51 crc kubenswrapper[4689]: I1201 08:53:51.155991 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:51 crc kubenswrapper[4689]: I1201 08:53:51.165304 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-metrics-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:51 crc kubenswrapper[4689]: I1201 08:53:51.165321 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f43cf3a-d166-44ba-8d44-9e81b0666e0a-webhook-certs\") pod \"openstack-operator-controller-manager-6fc767d767-8r9dw\" (UID: \"4f43cf3a-d166-44ba-8d44-9e81b0666e0a\") " pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:51 crc kubenswrapper[4689]: I1201 08:53:51.317487 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dvxk6" Dec 01 08:53:51 crc kubenswrapper[4689]: I1201 08:53:51.326941 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:53:56 crc kubenswrapper[4689]: E1201 08:53:56.102983 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 01 08:53:56 crc kubenswrapper[4689]: E1201 08:53:56.104075 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mprq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-w6qx2_openstack-operators(fc02885a-340a-4800-bd0b-360c0476b456): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:53:56 crc kubenswrapper[4689]: E1201 08:53:56.645222 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Dec 01 08:53:56 crc kubenswrapper[4689]: E1201 08:53:56.645815 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7pcl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-25q6j_openstack-operators(2b35aff9-c66d-448c-9883-05e650f7f147): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:53:58 crc kubenswrapper[4689]: E1201 08:53:58.967751 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Dec 01 08:53:58 crc kubenswrapper[4689]: E1201 08:53:58.968833 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xx8kx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-fm9bv_openstack-operators(0d311ded-de3a-42e8-87d3-23c50c4fbd8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:54:00 crc kubenswrapper[4689]: E1201 08:54:00.373055 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5" Dec 01 08:54:00 crc kubenswrapper[4689]: E1201 08:54:00.374463 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:ecf7be921850bdc04697ed1b332bab39ad2a64e4e45c2a445c04f9bae6ac61b5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56tkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6546668bfd-x722t_openstack-operators(3751be2a-8675-4b07-8198-101bfdd71d72): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:54:02 crc kubenswrapper[4689]: E1201 08:54:02.817259 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 01 08:54:02 crc kubenswrapper[4689]: E1201 08:54:02.817513 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-pssbg_openstack-operators(d4a1d78c-9486-4b3b-afac-2d51d2cb14df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:54:09 crc kubenswrapper[4689]: I1201 08:54:09.146933 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:54:09 crc kubenswrapper[4689]: I1201 08:54:09.147495 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:54:09 crc kubenswrapper[4689]: E1201 08:54:09.916574 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 01 08:54:09 crc kubenswrapper[4689]: E1201 08:54:09.916875 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xn55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-t56mz_openstack-operators(7085b604-e50c-4940-ac21-b6fe208c82cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:54:09 crc kubenswrapper[4689]: E1201 08:54:09.918148 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" podUID="7085b604-e50c-4940-ac21-b6fe208c82cd" Dec 01 08:54:10 crc kubenswrapper[4689]: E1201 08:54:10.062468 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.38:5001/openstack-k8s-operators/keystone-operator:2823ce61c0258b2da2f4404e65427f19f6d0a18f" Dec 01 08:54:10 crc kubenswrapper[4689]: E1201 08:54:10.062893 4689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.38:5001/openstack-k8s-operators/keystone-operator:2823ce61c0258b2da2f4404e65427f19f6d0a18f" Dec 01 08:54:10 crc kubenswrapper[4689]: E1201 08:54:10.065167 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.38:5001/openstack-k8s-operators/keystone-operator:2823ce61c0258b2da2f4404e65427f19f6d0a18f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfhqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-758d67db86-z298n_openstack-operators(2974e300-3f26-4ec0-912a-9ee6b78f33ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:54:10 crc kubenswrapper[4689]: I1201 08:54:10.068831 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 08:54:10 crc kubenswrapper[4689]: I1201 08:54:10.487868 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9"] Dec 01 08:54:10 crc kubenswrapper[4689]: I1201 08:54:10.610334 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw"] Dec 01 08:54:10 crc kubenswrapper[4689]: I1201 08:54:10.768863 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9"] Dec 01 08:54:10 crc kubenswrapper[4689]: W1201 08:54:10.779611 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f43cf3a_d166_44ba_8d44_9e81b0666e0a.slice/crio-058d88f06847464a9c9f0ecaec49ea6b503f1444852d591351259dce632a15e3 WatchSource:0}: Error finding container 058d88f06847464a9c9f0ecaec49ea6b503f1444852d591351259dce632a15e3: Status 404 returned error can't find the container with id 058d88f06847464a9c9f0ecaec49ea6b503f1444852d591351259dce632a15e3 Dec 01 08:54:10 crc kubenswrapper[4689]: W1201 08:54:10.781514 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d24ac0e_a14a_4644_ba9a_bc0a6bb0c538.slice/crio-b0f0626aa772a959f319ba27d7b27e9416db016ff3a8956669bb4b7f394f9fc1 WatchSource:0}: Error finding container b0f0626aa772a959f319ba27d7b27e9416db016ff3a8956669bb4b7f394f9fc1: Status 404 returned error can't find the container with id b0f0626aa772a959f319ba27d7b27e9416db016ff3a8956669bb4b7f394f9fc1 Dec 01 08:54:10 crc kubenswrapper[4689]: I1201 08:54:10.982960 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" event={"ID":"3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a","Type":"ContainerStarted","Data":"e59fb6fb45cf29d075833a33d00c3b71fa2b98532303d400115ac9d6e827ef12"} Dec 01 08:54:10 crc 
kubenswrapper[4689]: I1201 08:54:10.984746 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" event={"ID":"ae47d16a-5025-44f4-8fa4-f5aa08b126b8","Type":"ContainerStarted","Data":"e5b0e1ca75714ff3f38148592fad2593531e37a12b4cf4cba6605f284f908d55"} Dec 01 08:54:10 crc kubenswrapper[4689]: I1201 08:54:10.986393 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" event={"ID":"ea3e4b08-090d-444e-ba53-a3df490fbaf8","Type":"ContainerStarted","Data":"d46890654ac3bf95905488cacc82bbc6875b199f3e7054d393c7023165d23dfe"} Dec 01 08:54:10 crc kubenswrapper[4689]: I1201 08:54:10.988835 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" event={"ID":"e44ef73a-e172-4557-920d-42f84488390e","Type":"ContainerStarted","Data":"a71acbc11bbb1436c44ae9f788aecd75cfadb9ac2e8a3e96c531d494976877bf"} Dec 01 08:54:10 crc kubenswrapper[4689]: I1201 08:54:10.991271 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" event={"ID":"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538","Type":"ContainerStarted","Data":"b0f0626aa772a959f319ba27d7b27e9416db016ff3a8956669bb4b7f394f9fc1"} Dec 01 08:54:10 crc kubenswrapper[4689]: I1201 08:54:10.993981 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" event={"ID":"4f43cf3a-d166-44ba-8d44-9e81b0666e0a","Type":"ContainerStarted","Data":"058d88f06847464a9c9f0ecaec49ea6b503f1444852d591351259dce632a15e3"} Dec 01 08:54:12 crc kubenswrapper[4689]: I1201 08:54:12.026583 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" event={"ID":"ffc5e400-7853-4b1d-ae11-d6ffa553093a","Type":"ContainerStarted","Data":"850d2e57e694b90d1a1651336d47ee83e893a902353ad0441565600133bf8b81"} Dec 01 08:54:12 crc kubenswrapper[4689]: I1201 08:54:12.032929 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" event={"ID":"4d923f8c-103b-4b12-b2e7-ea926440e5e7","Type":"ContainerStarted","Data":"93aab006f6281f8ee17b6f05c766568d383eda0fea5bcd10f6b4866513687905"} Dec 01 08:54:12 crc kubenswrapper[4689]: I1201 08:54:12.037938 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" event={"ID":"5266d333-3337-4481-9478-2e1df848bfa2","Type":"ContainerStarted","Data":"7a67bb518fb7bd48a319d7d948d0b42751dc40acc5391c3844cbecd6a50823c3"} Dec 01 08:54:12 crc kubenswrapper[4689]: I1201 08:54:12.041655 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" event={"ID":"f94d79da-740a-4080-81d0-ff3bf1867b3d","Type":"ContainerStarted","Data":"5c8a188ae4918f28252525d4f1ed4bce661c362aa0df7e4b9adf6aa4d4425e27"} Dec 01 08:54:12 crc kubenswrapper[4689]: I1201 08:54:12.045687 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" event={"ID":"5f9861d6-2700-4af6-b385-e79220c14b2e","Type":"ContainerStarted","Data":"510706de463d3cc3ee305a707744841042c8c7dd65f83e004176b52779a99d8b"} Dec 01 08:54:12 crc kubenswrapper[4689]: I1201 08:54:12.047856 4689 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" event={"ID":"b3049390-311d-46ed-b472-d32a22f2f8d2","Type":"ContainerStarted","Data":"03a5f1e4ba4e2fbd1999315d002f1864a6be92021362866ce0fdec62bfbb07fa"} Dec 01 08:54:12 crc kubenswrapper[4689]: I1201 08:54:12.053488 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" event={"ID":"7ce2f328-3ee3-4800-89e4-9141c841c258","Type":"ContainerStarted","Data":"7347f054335dc90e751a8ca26d7279f6c11dddb31bdeecf16e5d090060e7a37d"} Dec 01 08:54:13 crc kubenswrapper[4689]: I1201 08:54:13.085670 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" event={"ID":"12885cbd-1d3e-40c1-b7f5-73bdb6572db9","Type":"ContainerStarted","Data":"15fdcdf37e974a67377b06bd36ea61e169720e8196e73318493a1f93a71ab8f9"} Dec 01 08:54:13 crc kubenswrapper[4689]: I1201 08:54:13.863313 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-npf4h"] Dec 01 08:54:13 crc kubenswrapper[4689]: I1201 08:54:13.913866 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-npf4h"] Dec 01 08:54:13 crc kubenswrapper[4689]: I1201 08:54:13.914032 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.015020 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-catalog-content\") pod \"community-operators-npf4h\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.015204 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llq68\" (UniqueName: \"kubernetes.io/projected/4bcb09c3-4edd-4caf-af39-a1b03319c91c-kube-api-access-llq68\") pod \"community-operators-npf4h\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.015311 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-utilities\") pod \"community-operators-npf4h\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.094575 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" event={"ID":"4f43cf3a-d166-44ba-8d44-9e81b0666e0a","Type":"ContainerStarted","Data":"d483bddb39b7d383a7886426aa5489fec1a40e6807ae3ae4ea1d31b9334898a5"} Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.097069 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.107781 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" 
event={"ID":"af92d0ca-8211-49a0-9362-bd5749143fff","Type":"ContainerStarted","Data":"82c462e52b8439d753bdc4006c7581c3634018c7793ad9dbf614d34a3df77478"} Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.116246 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llq68\" (UniqueName: \"kubernetes.io/projected/4bcb09c3-4edd-4caf-af39-a1b03319c91c-kube-api-access-llq68\") pod \"community-operators-npf4h\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.116405 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-utilities\") pod \"community-operators-npf4h\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.116606 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-catalog-content\") pod \"community-operators-npf4h\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.117080 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-catalog-content\") pod \"community-operators-npf4h\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.120343 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-utilities\") pod \"community-operators-npf4h\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.126139 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" podStartSLOduration=39.126103429 podStartE2EDuration="39.126103429s" podCreationTimestamp="2025-12-01 08:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:54:14.124464944 +0000 UTC m=+934.196752848" watchObservedRunningTime="2025-12-01 08:54:14.126103429 +0000 UTC m=+934.198391333" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.154732 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llq68\" (UniqueName: \"kubernetes.io/projected/4bcb09c3-4edd-4caf-af39-a1b03319c91c-kube-api-access-llq68\") pod \"community-operators-npf4h\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:14 crc kubenswrapper[4689]: I1201 08:54:14.242986 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:16 crc kubenswrapper[4689]: I1201 08:54:16.127124 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" event={"ID":"8b33263b-a51c-49e4-b301-b975791e098a","Type":"ContainerStarted","Data":"ed2ea08912c28122f8907b80dcd93c9f150e2e1faadc823dfb674c7affe39859"} Dec 01 08:54:17 crc kubenswrapper[4689]: E1201 08:54:17.461849 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" podUID="0d311ded-de3a-42e8-87d3-23c50c4fbd8a" Dec 01 08:54:17 crc kubenswrapper[4689]: I1201 08:54:17.521413 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-npf4h"] Dec 01 08:54:17 crc kubenswrapper[4689]: E1201 08:54:17.534787 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" podUID="3751be2a-8675-4b07-8198-101bfdd71d72" Dec 01 08:54:17 crc kubenswrapper[4689]: E1201 08:54:17.756077 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j" podUID="2b35aff9-c66d-448c-9883-05e650f7f147" Dec 01 08:54:17 crc kubenswrapper[4689]: E1201 08:54:17.762136 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" podUID="d4a1d78c-9486-4b3b-afac-2d51d2cb14df" Dec 01 08:54:17 crc kubenswrapper[4689]: E1201 08:54:17.905506 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" podUID="fc02885a-340a-4800-bd0b-360c0476b456" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.217312 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" event={"ID":"4d923f8c-103b-4b12-b2e7-ea926440e5e7","Type":"ContainerStarted","Data":"1c584986463df187d5a715cb438f593889430edd6f8ebb078e3a8149189c262e"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.237539 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.246312 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.256706 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" 
event={"ID":"3751be2a-8675-4b07-8198-101bfdd71d72","Type":"ContainerStarted","Data":"5cade129ae5a0ef1b9f6b2256de682bf10aaa779857aa60714eda3427f8c2a60"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.257285 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" podStartSLOduration=3.416649817 podStartE2EDuration="44.257253045s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.274208295 +0000 UTC m=+896.346496199" lastFinishedPulling="2025-12-01 08:54:17.114811513 +0000 UTC m=+937.187099427" observedRunningTime="2025-12-01 08:54:18.246878811 +0000 UTC m=+938.319166715" watchObservedRunningTime="2025-12-01 08:54:18.257253045 +0000 UTC m=+938.329540959" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.272259 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" event={"ID":"12885cbd-1d3e-40c1-b7f5-73bdb6572db9","Type":"ContainerStarted","Data":"065a1447d8de722d88a901828ba30d2ab9be58b4ee4878de99e24a2032963937"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.274060 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.277937 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" podStartSLOduration=3.668362708 podStartE2EDuration="44.277918983s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.680140929 +0000 UTC m=+896.752428833" lastFinishedPulling="2025-12-01 08:54:17.289697204 +0000 UTC m=+937.361985108" observedRunningTime="2025-12-01 08:54:18.274146249 +0000 UTC m=+938.346434153" watchObservedRunningTime="2025-12-01 08:54:18.277918983 +0000 UTC m=+938.350206887" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.285222 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.286283 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" event={"ID":"5266d333-3337-4481-9478-2e1df848bfa2","Type":"ContainerStarted","Data":"81b09db2a0e67f70d047f5af5dd6ee19c81ee4d064bd0b0af9d87a94395d45fc"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.287122 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.291978 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.299703 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" event={"ID":"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538","Type":"ContainerStarted","Data":"9f0bbc7e2d5153cebd64da616c8eefaaba1deba76bf8cd281a7c8794eab90f8f"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.321098 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" podStartSLOduration=3.759936992 podStartE2EDuration="44.321077607s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.553688918 +0000 UTC m=+896.625976822" lastFinishedPulling="2025-12-01 08:54:17.114829523 +0000 UTC m=+937.187117437" observedRunningTime="2025-12-01 08:54:18.320957124 +0000 UTC m=+938.393245028" watchObservedRunningTime="2025-12-01 08:54:18.321077607 +0000 UTC m=+938.393365511" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.330212 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" event={"ID":"ea3e4b08-090d-444e-ba53-a3df490fbaf8","Type":"ContainerStarted","Data":"ea42ba32ec5b72cdf8fe1507cb8d04d8ee4f17affab670cbc46a8187f661be13"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.331415 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.351752 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.351865 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" event={"ID":"fc02885a-340a-4800-bd0b-360c0476b456","Type":"ContainerStarted","Data":"f614b70a163a335dbf77e8703f6ba7e6a80940fbb99409c8749baa05ca76c084"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.385531 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npf4h" event={"ID":"4bcb09c3-4edd-4caf-af39-a1b03319c91c","Type":"ContainerStarted","Data":"e55a861b5c836a463edb2f6f8300c5f689fc8bea64fb6188c3f4b698f06b5b1c"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.409116 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" event={"ID":"d4a1d78c-9486-4b3b-afac-2d51d2cb14df","Type":"ContainerStarted","Data":"fd961c7e046b5da93b1ed06d7ef92d70499aaee9795b9d611fa2586d443b5ac7"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.417379 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j" event={"ID":"2b35aff9-c66d-448c-9883-05e650f7f147","Type":"ContainerStarted","Data":"19efc071475596275401ee4577f0887abe63d68a9fbc02f83fe089e44d07eb7f"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.480280 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.488950 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" event={"ID":"e44ef73a-e172-4557-920d-42f84488390e","Type":"ContainerStarted","Data":"b4446a22dda5c8c8d06cb6191791c18c7741f439351c31f950ba7c4a66a5f81e"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.495827 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" podStartSLOduration=3.116533107 podStartE2EDuration="44.495806773s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" 
firstStartedPulling="2025-12-01 08:53:35.80448066 +0000 UTC m=+895.876768564" lastFinishedPulling="2025-12-01 08:54:17.183754326 +0000 UTC m=+937.256042230" observedRunningTime="2025-12-01 08:54:18.44393002 +0000 UTC m=+938.516217924" watchObservedRunningTime="2025-12-01 08:54:18.495806773 +0000 UTC m=+938.568094677" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.501898 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" podStartSLOduration=3.08673581 podStartE2EDuration="44.501875601s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:35.699960081 +0000 UTC m=+895.772247985" lastFinishedPulling="2025-12-01 08:54:17.115099872 +0000 UTC m=+937.187387776" observedRunningTime="2025-12-01 08:54:18.482084478 +0000 UTC m=+938.554372372" watchObservedRunningTime="2025-12-01 08:54:18.501875601 +0000 UTC m=+938.574163505" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.517253 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" event={"ID":"0d311ded-de3a-42e8-87d3-23c50c4fbd8a","Type":"ContainerStarted","Data":"237ee7d696e4eff8e681bb1e5c55950faaef027117a7d72d56a963a7f34d3021"} Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.518282 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.538815 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.587821 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.623311 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" podStartSLOduration=3.329496684 podStartE2EDuration="44.623292433s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:35.889858524 +0000 UTC m=+895.962146428" lastFinishedPulling="2025-12-01 08:54:17.183654273 +0000 UTC m=+937.255942177" observedRunningTime="2025-12-01 08:54:18.615555021 +0000 UTC m=+938.687842925" watchObservedRunningTime="2025-12-01 08:54:18.623292433 +0000 UTC m=+938.695580337" Dec 01 08:54:18 crc kubenswrapper[4689]: I1201 08:54:18.672634 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" podStartSLOduration=4.128968121 podStartE2EDuration="44.672615618s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.550225662 +0000 UTC m=+896.622513566" lastFinishedPulling="2025-12-01 08:54:17.093873159 +0000 UTC m=+937.166161063" observedRunningTime="2025-12-01 08:54:18.669681706 +0000 UTC m=+938.741969610" watchObservedRunningTime="2025-12-01 08:54:18.672615618 +0000 UTC m=+938.744903522" Dec 01 08:54:18 crc kubenswrapper[4689]: E1201 08:54:18.822800 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" podUID="2974e300-3f26-4ec0-912a-9ee6b78f33ce" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.551395 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npf4h" event={"ID":"4bcb09c3-4edd-4caf-af39-a1b03319c91c","Type":"ContainerDied","Data":"ccda3f4ab8e3582069b880423884ec4546ac7a799bf51f73b3d8ac0cfe2f6a8a"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.551353 4689 generic.go:334] "Generic (PLEG): container finished" podID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerID="ccda3f4ab8e3582069b880423884ec4546ac7a799bf51f73b3d8ac0cfe2f6a8a" exitCode=0 Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.560777 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" event={"ID":"d4a1d78c-9486-4b3b-afac-2d51d2cb14df","Type":"ContainerStarted","Data":"580ab1486fbeebc6a290a48124fc05016d87f3ed618dcb818814955c2c5f1fc7"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.560948 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.583833 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" event={"ID":"3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a","Type":"ContainerStarted","Data":"8bb213df175455fea3a620dde28ce8e2dac6fbab5ca2b8e5bde530225fbcc34f"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.588721 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" event={"ID":"2974e300-3f26-4ec0-912a-9ee6b78f33ce","Type":"ContainerStarted","Data":"4695c708b36dd34e2ea4bbb26b96269181b4c6535b80d5054c29d70d9ebdcb55"} Dec 01 08:54:19 crc kubenswrapper[4689]: E1201 08:54:19.589914 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.38:5001/openstack-k8s-operators/keystone-operator:2823ce61c0258b2da2f4404e65427f19f6d0a18f\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" podUID="2974e300-3f26-4ec0-912a-9ee6b78f33ce" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.591930 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" event={"ID":"8b33263b-a51c-49e4-b301-b975791e098a","Type":"ContainerStarted","Data":"da9057a6013291ca289d3d7cfde30b73069ca4a871b37b53217c8a4c05953659"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.592025 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.593923 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" event={"ID":"5f9861d6-2700-4af6-b385-e79220c14b2e","Type":"ContainerStarted","Data":"bb31c20d78c3764348ad56edca95ccf44b4db979842b6aafd58256bd884b1a25"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.594123 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 08:54:19 crc 
kubenswrapper[4689]: I1201 08:54:19.596566 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.596824 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" event={"ID":"b3049390-311d-46ed-b472-d32a22f2f8d2","Type":"ContainerStarted","Data":"9c923e5d61760a6bfbb35f018850e7a2e06d950c29424c4c57b34360a14f5952"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.597054 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.599014 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.599386 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" event={"ID":"7ce2f328-3ee3-4800-89e4-9141c841c258","Type":"ContainerStarted","Data":"f4872f8c7fe96d3e2afabd998224b903e5c32f995dad57b87ed0e9a19498991f"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.601689 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" event={"ID":"e44ef73a-e172-4557-920d-42f84488390e","Type":"ContainerStarted","Data":"6d2e20d9ca4e986a3f3f36a5b7f06c2bcc5793bf3a1d365c32b42ac4a5795495"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.601752 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.602960 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" event={"ID":"0d311ded-de3a-42e8-87d3-23c50c4fbd8a","Type":"ContainerStarted","Data":"30cce1a511301fd9f4cdb6c5e73063560df763e8e94ce72cd015af9364d80bb9"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.603059 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.608337 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" podStartSLOduration=3.008298458 podStartE2EDuration="45.608324365s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.321943966 +0000 UTC m=+896.394231870" lastFinishedPulling="2025-12-01 08:54:18.921969873 +0000 UTC m=+938.994257777" observedRunningTime="2025-12-01 08:54:19.605464456 +0000 UTC m=+939.677752360" watchObservedRunningTime="2025-12-01 08:54:19.608324365 +0000 UTC m=+939.680612269" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.619537 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" event={"ID":"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538","Type":"ContainerStarted","Data":"b1c31a4c3401b867c6665257208c9f68aa3527fa4f87c0db63d82bdc9703b0e8"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.620078 4689 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.626920 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" event={"ID":"ffc5e400-7853-4b1d-ae11-d6ffa553093a","Type":"ContainerStarted","Data":"4ccff18e867cd14eb182b4cc4d3c1de3efae44fc7fc13fb0fef0eeb75bdfb5c6"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.627498 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.632966 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.640539 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" podStartSLOduration=4.997681689 podStartE2EDuration="45.640526948s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.49220246 +0000 UTC m=+896.564490364" lastFinishedPulling="2025-12-01 08:54:17.135047719 +0000 UTC m=+937.207335623" observedRunningTime="2025-12-01 08:54:19.637254348 +0000 UTC m=+939.709542252" watchObservedRunningTime="2025-12-01 08:54:19.640526948 +0000 UTC m=+939.712814842" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.643556 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" event={"ID":"fc02885a-340a-4800-bd0b-360c0476b456","Type":"ContainerStarted","Data":"4bfc28c452869dd01be53694f326b71644b275ca79ce9ed226ff50d415747351"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.644030 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.678297 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" podStartSLOduration=5.066746355 podStartE2EDuration="45.678280135s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.678343229 +0000 UTC m=+896.750631133" lastFinishedPulling="2025-12-01 08:54:17.289877009 +0000 UTC m=+937.362164913" observedRunningTime="2025-12-01 08:54:19.673581246 +0000 UTC m=+939.745869160" watchObservedRunningTime="2025-12-01 08:54:19.678280135 +0000 UTC m=+939.750568039" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.679152 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" event={"ID":"ae47d16a-5025-44f4-8fa4-f5aa08b126b8","Type":"ContainerStarted","Data":"db1bec674a1896f7e39ede550cdb20785966c276e568262603dc78742e5134ec"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.679967 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.710044 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" 
event={"ID":"f94d79da-740a-4080-81d0-ff3bf1867b3d","Type":"ContainerStarted","Data":"8d0a8608e9d4e62cf86478ddbf71ee3612f8d442f2aabbda7aea40a351a7e3f2"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.710114 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.711037 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.718570 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.721451 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" event={"ID":"3751be2a-8675-4b07-8198-101bfdd71d72","Type":"ContainerStarted","Data":"9c68b68d67785316e6f6553dfce056ace95416dc5b9b69feb61079e214f7a7d8"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.721505 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.742021 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" event={"ID":"af92d0ca-8211-49a0-9362-bd5749143fff","Type":"ContainerStarted","Data":"3ef438b8959c3781699868eabb687903f97729863dd1d1827e6b1e060558844d"} Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.742713 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.767092 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.772447 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" podStartSLOduration=5.251510687 podStartE2EDuration="45.772421329s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.671596834 +0000 UTC m=+896.743884738" lastFinishedPulling="2025-12-01 08:54:17.192507476 +0000 UTC m=+937.264795380" observedRunningTime="2025-12-01 08:54:19.765812068 +0000 UTC m=+939.838099972" watchObservedRunningTime="2025-12-01 08:54:19.772421329 +0000 UTC m=+939.844709233" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.830490 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" podStartSLOduration=3.055369269 podStartE2EDuration="45.830464752s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.311518639 +0000 UTC m=+896.383806543" lastFinishedPulling="2025-12-01 08:54:19.086614122 +0000 UTC m=+939.158902026" observedRunningTime="2025-12-01 08:54:19.829541337 +0000 UTC m=+939.901829251" watchObservedRunningTime="2025-12-01 08:54:19.830464752 +0000 UTC m=+939.902752646" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.834079 4689 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" podStartSLOduration=39.680978578 podStartE2EDuration="45.834067091s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:54:10.802745887 +0000 UTC m=+930.875033791" lastFinishedPulling="2025-12-01 08:54:16.9558344 +0000 UTC m=+937.028122304" observedRunningTime="2025-12-01 08:54:19.804120099 +0000 UTC m=+939.876408003" watchObservedRunningTime="2025-12-01 08:54:19.834067091 +0000 UTC m=+939.906354995" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.855076 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" podStartSLOduration=5.323302509 podStartE2EDuration="45.855054058s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.67507976 +0000 UTC m=+896.747367664" lastFinishedPulling="2025-12-01 08:54:17.206831309 +0000 UTC m=+937.279119213" observedRunningTime="2025-12-01 08:54:19.852760354 +0000 UTC m=+939.925048258" watchObservedRunningTime="2025-12-01 08:54:19.855054058 +0000 UTC m=+939.927341962" Dec 01 08:54:19 crc kubenswrapper[4689]: I1201 08:54:19.921095 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" podStartSLOduration=2.967640381 podStartE2EDuration="45.921073089s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.135446236 +0000 UTC m=+896.207734140" lastFinishedPulling="2025-12-01 08:54:19.088878944 +0000 UTC m=+939.161166848" observedRunningTime="2025-12-01 08:54:19.917580164 +0000 UTC m=+939.989868068" watchObservedRunningTime="2025-12-01 08:54:19.921073089 +0000 UTC m=+939.993360993" Dec 01 08:54:20 crc kubenswrapper[4689]: I1201 08:54:20.036484 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" podStartSLOduration=4.706533296 podStartE2EDuration="46.036458667s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.139635071 +0000 UTC m=+896.211922975" lastFinishedPulling="2025-12-01 08:54:17.469560442 +0000 UTC m=+937.541848346" observedRunningTime="2025-12-01 08:54:19.956631296 +0000 UTC m=+940.028919200" watchObservedRunningTime="2025-12-01 08:54:20.036458667 +0000 UTC m=+940.108746571" Dec 01 08:54:20 crc kubenswrapper[4689]: E1201 08:54:20.053412 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" podUID="7085b604-e50c-4940-ac21-b6fe208c82cd" Dec 01 08:54:20 crc kubenswrapper[4689]: I1201 08:54:20.080634 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" podStartSLOduration=39.909343398 podStartE2EDuration="46.080614419s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:54:10.784159428 +0000 UTC m=+930.856447322" lastFinishedPulling="2025-12-01 08:54:16.955430439 +0000 UTC m=+937.027718343" observedRunningTime="2025-12-01 08:54:20.036256221 +0000 UTC 
m=+940.108544125" watchObservedRunningTime="2025-12-01 08:54:20.080614419 +0000 UTC m=+940.152902323" Dec 01 08:54:20 crc kubenswrapper[4689]: I1201 08:54:20.084325 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" podStartSLOduration=3.264745535 podStartE2EDuration="46.08431273s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.102644055 +0000 UTC m=+896.174931959" lastFinishedPulling="2025-12-01 08:54:18.92221125 +0000 UTC m=+938.994499154" observedRunningTime="2025-12-01 08:54:20.075830827 +0000 UTC m=+940.148118731" watchObservedRunningTime="2025-12-01 08:54:20.08431273 +0000 UTC m=+940.156600634" Dec 01 08:54:20 crc kubenswrapper[4689]: I1201 08:54:20.107865 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" podStartSLOduration=5.081319754 podStartE2EDuration="46.107830256s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.311398176 +0000 UTC m=+896.383686080" lastFinishedPulling="2025-12-01 08:54:17.337908678 +0000 UTC m=+937.410196582" observedRunningTime="2025-12-01 08:54:20.105931814 +0000 UTC m=+940.178219718" watchObservedRunningTime="2025-12-01 08:54:20.107830256 +0000 UTC m=+940.180118160" Dec 01 08:54:20 crc kubenswrapper[4689]: I1201 08:54:20.749194 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npf4h" event={"ID":"4bcb09c3-4edd-4caf-af39-a1b03319c91c","Type":"ContainerStarted","Data":"0acf03fc0cd0936a5b6ff89694849b9e7a61bb873fcc2a16ffbd28d8309c8f13"} Dec 01 08:54:20 crc kubenswrapper[4689]: I1201 08:54:20.752046 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j" event={"ID":"2b35aff9-c66d-448c-9883-05e650f7f147","Type":"ContainerStarted","Data":"c429f383ed77e11c77005095654b6db14147322cb0315849c05cc5511bbcaf4e"} Dec 01 08:54:20 crc kubenswrapper[4689]: I1201 08:54:20.755402 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j" Dec 01 08:54:20 crc kubenswrapper[4689]: I1201 08:54:20.757937 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" Dec 01 08:54:20 crc kubenswrapper[4689]: I1201 08:54:20.830344 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j" podStartSLOduration=3.33082279 podStartE2EDuration="46.8303149s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:35.777139899 +0000 UTC m=+895.849427803" lastFinishedPulling="2025-12-01 08:54:19.276632009 +0000 UTC m=+939.348919913" observedRunningTime="2025-12-01 08:54:20.818244298 +0000 UTC m=+940.890532202" watchObservedRunningTime="2025-12-01 08:54:20.8303149 +0000 UTC m=+940.902602804" Dec 01 08:54:21 crc kubenswrapper[4689]: I1201 08:54:21.335479 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" Dec 01 08:54:21 crc kubenswrapper[4689]: I1201 08:54:21.760047 4689 generic.go:334] "Generic (PLEG): container finished" podID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" 
containerID="0acf03fc0cd0936a5b6ff89694849b9e7a61bb873fcc2a16ffbd28d8309c8f13" exitCode=0 Dec 01 08:54:21 crc kubenswrapper[4689]: I1201 08:54:21.760139 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npf4h" event={"ID":"4bcb09c3-4edd-4caf-af39-a1b03319c91c","Type":"ContainerDied","Data":"0acf03fc0cd0936a5b6ff89694849b9e7a61bb873fcc2a16ffbd28d8309c8f13"} Dec 01 08:54:21 crc kubenswrapper[4689]: I1201 08:54:21.765069 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" event={"ID":"2974e300-3f26-4ec0-912a-9ee6b78f33ce","Type":"ContainerStarted","Data":"f0bd44809a243032f4330eb55da2d873db0cefecf714240eb92fe0d9ad165ac2"} Dec 01 08:54:21 crc kubenswrapper[4689]: I1201 08:54:21.765527 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" Dec 01 08:54:21 crc kubenswrapper[4689]: I1201 08:54:21.801232 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" podStartSLOduration=2.602039324 podStartE2EDuration="47.801206652s" podCreationTimestamp="2025-12-01 08:53:34 +0000 UTC" firstStartedPulling="2025-12-01 08:53:35.80448161 +0000 UTC m=+895.876769514" lastFinishedPulling="2025-12-01 08:54:21.003648938 +0000 UTC m=+941.075936842" observedRunningTime="2025-12-01 08:54:21.797955703 +0000 UTC m=+941.870243607" watchObservedRunningTime="2025-12-01 08:54:21.801206652 +0000 UTC m=+941.873494556" Dec 01 08:54:22 crc kubenswrapper[4689]: I1201 08:54:22.772226 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npf4h" event={"ID":"4bcb09c3-4edd-4caf-af39-a1b03319c91c","Type":"ContainerStarted","Data":"e7ccda4b6218ac6470c15bcbfbdc05f6814fcb6831e702db12559a65dd33333c"} Dec 01 08:54:22 crc kubenswrapper[4689]: I1201 08:54:22.812156 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-npf4h" podStartSLOduration=7.042675528 podStartE2EDuration="9.812133133s" podCreationTimestamp="2025-12-01 08:54:13 +0000 UTC" firstStartedPulling="2025-12-01 08:54:19.555184726 +0000 UTC m=+939.627472630" lastFinishedPulling="2025-12-01 08:54:22.324642331 +0000 UTC m=+942.396930235" observedRunningTime="2025-12-01 08:54:22.806876269 +0000 UTC m=+942.879164193" watchObservedRunningTime="2025-12-01 08:54:22.812133133 +0000 UTC m=+942.884421037" Dec 01 08:54:23 crc kubenswrapper[4689]: I1201 08:54:23.970732 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d4bhr"] Dec 01 08:54:23 crc kubenswrapper[4689]: I1201 08:54:23.972891 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:23 crc kubenswrapper[4689]: I1201 08:54:23.996633 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4bhr"] Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.136052 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-utilities\") pod \"certified-operators-d4bhr\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.136096 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-catalog-content\") pod \"certified-operators-d4bhr\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.136417 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-524fc\" (UniqueName: \"kubernetes.io/projected/e56d18b9-f976-4d6c-81da-5791c07cc79f-kube-api-access-524fc\") pod \"certified-operators-d4bhr\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.238674 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-utilities\") pod \"certified-operators-d4bhr\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.238741 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-catalog-content\") pod \"certified-operators-d4bhr\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.238813 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-524fc\" (UniqueName: \"kubernetes.io/projected/e56d18b9-f976-4d6c-81da-5791c07cc79f-kube-api-access-524fc\") pod \"certified-operators-d4bhr\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.239231 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-utilities\") pod \"certified-operators-d4bhr\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.239689 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-catalog-content\") pod \"certified-operators-d4bhr\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.244341 4689 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.244426 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.259538 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-524fc\" (UniqueName: \"kubernetes.io/projected/e56d18b9-f976-4d6c-81da-5791c07cc79f-kube-api-access-524fc\") pod \"certified-operators-d4bhr\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.337298 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.560036 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.874031 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.909851 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" Dec 01 08:54:24 crc kubenswrapper[4689]: I1201 08:54:24.968453 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" Dec 01 08:54:25 crc kubenswrapper[4689]: I1201 08:54:25.085032 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4bhr"] Dec 01 08:54:25 crc kubenswrapper[4689]: I1201 08:54:25.085134 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" Dec 01 08:54:25 crc kubenswrapper[4689]: I1201 08:54:25.339571 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-npf4h" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerName="registry-server" probeResult="failure" output=< Dec 01 08:54:25 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s Dec 01 08:54:25 crc kubenswrapper[4689]: > Dec 01 08:54:25 crc kubenswrapper[4689]: I1201 08:54:25.791677 4689 generic.go:334] "Generic (PLEG): container finished" podID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerID="b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0" exitCode=0 Dec 01 08:54:25 crc kubenswrapper[4689]: I1201 08:54:25.791786 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4bhr" event={"ID":"e56d18b9-f976-4d6c-81da-5791c07cc79f","Type":"ContainerDied","Data":"b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0"} Dec 01 08:54:25 crc kubenswrapper[4689]: I1201 08:54:25.791974 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4bhr" event={"ID":"e56d18b9-f976-4d6c-81da-5791c07cc79f","Type":"ContainerStarted","Data":"f89f027839fde6a6ebee1c06b868089fc1f269c6f48ad9553120b05c948934d6"} Dec 01 08:54:27 crc kubenswrapper[4689]: I1201 08:54:27.816644 4689 generic.go:334] "Generic (PLEG): 
container finished" podID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerID="fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653" exitCode=0 Dec 01 08:54:27 crc kubenswrapper[4689]: I1201 08:54:27.816756 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4bhr" event={"ID":"e56d18b9-f976-4d6c-81da-5791c07cc79f","Type":"ContainerDied","Data":"fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653"} Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.186731 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rrrjc"] Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.188570 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.214495 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrrjc"] Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.302047 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-utilities\") pod \"redhat-marketplace-rrrjc\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.302130 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-catalog-content\") pod \"redhat-marketplace-rrrjc\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.302156 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpns6\" (UniqueName: \"kubernetes.io/projected/ace97617-7cba-4be0-98d9-09227611793a-kube-api-access-mpns6\") pod \"redhat-marketplace-rrrjc\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.403569 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-catalog-content\") pod \"redhat-marketplace-rrrjc\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.403615 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpns6\" (UniqueName: \"kubernetes.io/projected/ace97617-7cba-4be0-98d9-09227611793a-kube-api-access-mpns6\") pod \"redhat-marketplace-rrrjc\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.403674 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-utilities\") pod \"redhat-marketplace-rrrjc\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.404101 4689 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-utilities\") pod \"redhat-marketplace-rrrjc\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.404309 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-catalog-content\") pod \"redhat-marketplace-rrrjc\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.428220 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpns6\" (UniqueName: \"kubernetes.io/projected/ace97617-7cba-4be0-98d9-09227611793a-kube-api-access-mpns6\") pod \"redhat-marketplace-rrrjc\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.504981 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.879594 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4bhr" event={"ID":"e56d18b9-f976-4d6c-81da-5791c07cc79f","Type":"ContainerStarted","Data":"d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073"} Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.905786 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d4bhr" podStartSLOduration=3.335174936 podStartE2EDuration="5.905766843s" podCreationTimestamp="2025-12-01 08:54:23 +0000 UTC" firstStartedPulling="2025-12-01 08:54:25.793213188 +0000 UTC m=+945.865501092" lastFinishedPulling="2025-12-01 08:54:28.363805095 +0000 UTC m=+948.436092999" observedRunningTime="2025-12-01 08:54:28.905647309 +0000 UTC m=+948.977935223" watchObservedRunningTime="2025-12-01 08:54:28.905766843 +0000 UTC m=+948.978054747" Dec 01 08:54:28 crc kubenswrapper[4689]: W1201 08:54:28.919168 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace97617_7cba_4be0_98d9_09227611793a.slice/crio-f75fa368834bf1dbc1197588d7bfea790d8f737eb8cd7afdee6e6a85eb3bf5ac WatchSource:0}: Error finding container f75fa368834bf1dbc1197588d7bfea790d8f737eb8cd7afdee6e6a85eb3bf5ac: Status 404 returned error can't find the container with id f75fa368834bf1dbc1197588d7bfea790d8f737eb8cd7afdee6e6a85eb3bf5ac Dec 01 08:54:28 crc kubenswrapper[4689]: I1201 08:54:28.925304 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrrjc"] Dec 01 08:54:29 crc kubenswrapper[4689]: I1201 08:54:29.889747 4689 generic.go:334] "Generic (PLEG): container finished" podID="ace97617-7cba-4be0-98d9-09227611793a" containerID="25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc" exitCode=0 Dec 01 08:54:29 crc kubenswrapper[4689]: I1201 08:54:29.889842 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrrjc" event={"ID":"ace97617-7cba-4be0-98d9-09227611793a","Type":"ContainerDied","Data":"25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc"} Dec 01 08:54:29 crc kubenswrapper[4689]: I1201 08:54:29.889869 4689 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrrjc" event={"ID":"ace97617-7cba-4be0-98d9-09227611793a","Type":"ContainerStarted","Data":"f75fa368834bf1dbc1197588d7bfea790d8f737eb8cd7afdee6e6a85eb3bf5ac"} Dec 01 08:54:30 crc kubenswrapper[4689]: I1201 08:54:30.591896 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 08:54:30 crc kubenswrapper[4689]: I1201 08:54:30.898972 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrrjc" event={"ID":"ace97617-7cba-4be0-98d9-09227611793a","Type":"ContainerStarted","Data":"968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b"} Dec 01 08:54:31 crc kubenswrapper[4689]: I1201 08:54:31.124459 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 08:54:31 crc kubenswrapper[4689]: I1201 08:54:31.909284 4689 generic.go:334] "Generic (PLEG): container finished" podID="ace97617-7cba-4be0-98d9-09227611793a" containerID="968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b" exitCode=0 Dec 01 08:54:31 crc kubenswrapper[4689]: I1201 08:54:31.909347 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrrjc" event={"ID":"ace97617-7cba-4be0-98d9-09227611793a","Type":"ContainerDied","Data":"968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b"} Dec 01 08:54:33 crc kubenswrapper[4689]: I1201 08:54:33.924646 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrrjc" event={"ID":"ace97617-7cba-4be0-98d9-09227611793a","Type":"ContainerStarted","Data":"91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c"} Dec 01 08:54:33 crc kubenswrapper[4689]: I1201 08:54:33.956655 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rrrjc" podStartSLOduration=3.044284099 podStartE2EDuration="5.956627746s" podCreationTimestamp="2025-12-01 08:54:28 +0000 UTC" firstStartedPulling="2025-12-01 08:54:29.89246666 +0000 UTC m=+949.964754614" lastFinishedPulling="2025-12-01 08:54:32.804810347 +0000 UTC m=+952.877098261" observedRunningTime="2025-12-01 08:54:33.955301621 +0000 UTC m=+954.027589525" watchObservedRunningTime="2025-12-01 08:54:33.956627746 +0000 UTC m=+954.028915660" Dec 01 08:54:34 crc kubenswrapper[4689]: I1201 08:54:34.288249 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:34 crc kubenswrapper[4689]: I1201 08:54:34.338276 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:34 crc kubenswrapper[4689]: I1201 08:54:34.338693 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:34 crc kubenswrapper[4689]: I1201 08:54:34.356232 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:34 crc kubenswrapper[4689]: I1201 08:54:34.393968 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:34 crc kubenswrapper[4689]: I1201 
08:54:34.733006 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" Dec 01 08:54:34 crc kubenswrapper[4689]: I1201 08:54:34.931831 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" event={"ID":"7085b604-e50c-4940-ac21-b6fe208c82cd","Type":"ContainerStarted","Data":"b444725f23a637106d845d7ad3c9b06ca2f998ada5bc009210dbada3c38c6417"} Dec 01 08:54:34 crc kubenswrapper[4689]: I1201 08:54:34.991736 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:35 crc kubenswrapper[4689]: I1201 08:54:35.010653 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" podStartSLOduration=2.124167219 podStartE2EDuration="1m0.010624912s" podCreationTimestamp="2025-12-01 08:53:35 +0000 UTC" firstStartedPulling="2025-12-01 08:53:36.675350877 +0000 UTC m=+896.747638781" lastFinishedPulling="2025-12-01 08:54:34.56180856 +0000 UTC m=+954.634096474" observedRunningTime="2025-12-01 08:54:34.954755638 +0000 UTC m=+955.027043552" watchObservedRunningTime="2025-12-01 08:54:35.010624912 +0000 UTC m=+955.082912816" Dec 01 08:54:36 crc kubenswrapper[4689]: I1201 08:54:36.548946 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-npf4h"] Dec 01 08:54:36 crc kubenswrapper[4689]: I1201 08:54:36.549595 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-npf4h" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerName="registry-server" containerID="cri-o://e7ccda4b6218ac6470c15bcbfbdc05f6814fcb6831e702db12559a65dd33333c" gracePeriod=2 Dec 01 08:54:36 crc kubenswrapper[4689]: I1201 08:54:36.753449 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4bhr"] Dec 01 08:54:36 crc kubenswrapper[4689]: I1201 08:54:36.948087 4689 generic.go:334] "Generic (PLEG): container finished" podID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerID="e7ccda4b6218ac6470c15bcbfbdc05f6814fcb6831e702db12559a65dd33333c" exitCode=0 Dec 01 08:54:36 crc kubenswrapper[4689]: I1201 08:54:36.948693 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npf4h" event={"ID":"4bcb09c3-4edd-4caf-af39-a1b03319c91c","Type":"ContainerDied","Data":"e7ccda4b6218ac6470c15bcbfbdc05f6814fcb6831e702db12559a65dd33333c"} Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.467819 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.547470 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llq68\" (UniqueName: \"kubernetes.io/projected/4bcb09c3-4edd-4caf-af39-a1b03319c91c-kube-api-access-llq68\") pod \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.547563 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-catalog-content\") pod \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.547613 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-utilities\") pod \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\" (UID: \"4bcb09c3-4edd-4caf-af39-a1b03319c91c\") " Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.548468 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-utilities" (OuterVolumeSpecName: "utilities") pod "4bcb09c3-4edd-4caf-af39-a1b03319c91c" (UID: "4bcb09c3-4edd-4caf-af39-a1b03319c91c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.553239 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bcb09c3-4edd-4caf-af39-a1b03319c91c-kube-api-access-llq68" (OuterVolumeSpecName: "kube-api-access-llq68") pod "4bcb09c3-4edd-4caf-af39-a1b03319c91c" (UID: "4bcb09c3-4edd-4caf-af39-a1b03319c91c"). InnerVolumeSpecName "kube-api-access-llq68". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.597280 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bcb09c3-4edd-4caf-af39-a1b03319c91c" (UID: "4bcb09c3-4edd-4caf-af39-a1b03319c91c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.648779 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.648812 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llq68\" (UniqueName: \"kubernetes.io/projected/4bcb09c3-4edd-4caf-af39-a1b03319c91c-kube-api-access-llq68\") on node \"crc\" DevicePath \"\"" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.648823 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcb09c3-4edd-4caf-af39-a1b03319c91c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.957093 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npf4h" event={"ID":"4bcb09c3-4edd-4caf-af39-a1b03319c91c","Type":"ContainerDied","Data":"e55a861b5c836a463edb2f6f8300c5f689fc8bea64fb6188c3f4b698f06b5b1c"} Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.957169 4689 scope.go:117] "RemoveContainer" containerID="e7ccda4b6218ac6470c15bcbfbdc05f6814fcb6831e702db12559a65dd33333c" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.957237 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d4bhr" podUID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerName="registry-server" containerID="cri-o://d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073" gracePeriod=2 Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.957320 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npf4h" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.997164 4689 scope.go:117] "RemoveContainer" containerID="0acf03fc0cd0936a5b6ff89694849b9e7a61bb873fcc2a16ffbd28d8309c8f13" Dec 01 08:54:37 crc kubenswrapper[4689]: I1201 08:54:37.997442 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-npf4h"] Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.006739 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-npf4h"] Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.031601 4689 scope.go:117] "RemoveContainer" containerID="ccda3f4ab8e3582069b880423884ec4546ac7a799bf51f73b3d8ac0cfe2f6a8a" Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.506000 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.506072 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.556752 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.886079 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.993100 4689 generic.go:334] "Generic (PLEG): container finished" podID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerID="d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073" exitCode=0 Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.994107 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4bhr" Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.994549 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4bhr" event={"ID":"e56d18b9-f976-4d6c-81da-5791c07cc79f","Type":"ContainerDied","Data":"d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073"} Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.994580 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4bhr" event={"ID":"e56d18b9-f976-4d6c-81da-5791c07cc79f","Type":"ContainerDied","Data":"f89f027839fde6a6ebee1c06b868089fc1f269c6f48ad9553120b05c948934d6"} Dec 01 08:54:38 crc kubenswrapper[4689]: I1201 08:54:38.994602 4689 scope.go:117] "RemoveContainer" containerID="d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.018966 4689 scope.go:117] "RemoveContainer" containerID="fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.051772 4689 scope.go:117] "RemoveContainer" containerID="b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.060274 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" path="/var/lib/kubelet/pods/4bcb09c3-4edd-4caf-af39-a1b03319c91c/volumes" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.067504 4689 scope.go:117] "RemoveContainer" containerID="d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073" Dec 01 08:54:39 crc kubenswrapper[4689]: E1201 08:54:39.068010 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073\": container with ID starting with d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073 not found: ID does not exist" containerID="d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.068060 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073"} err="failed to get container status \"d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073\": rpc error: code = NotFound desc = could not find container \"d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073\": container with ID starting with d1568652044d893d19eff9c07ba062509a7dc236da5e995df463b438ac8b1073 not found: ID does not exist" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.068091 4689 scope.go:117] "RemoveContainer" containerID="fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653" Dec 01 08:54:39 crc kubenswrapper[4689]: E1201 08:54:39.068342 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653\": container with ID starting with fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653 not found: ID does not exist" containerID="fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.068379 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653"} err="failed to get container status \"fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653\": rpc error: code = NotFound desc = could not find container \"fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653\": container with ID starting with fa9c6d7c8c5afb40cf7251371eca5d075749a70f23c7ce4d9a4adef6d646b653 not found: ID does not exist" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.068391 4689 scope.go:117] "RemoveContainer" containerID="b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0" Dec 01 08:54:39 crc kubenswrapper[4689]: E1201 08:54:39.068632 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0\": container with ID starting with b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0 not found: ID does not exist" containerID="b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.068653 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0"} err="failed to get container status \"b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0\": rpc error: code = NotFound desc = could not find container \"b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0\": container with ID starting with b979203e2690c9e6ee708ad564ab3d3030434e080976b76115148ec7df9724a0 not found: ID does not exist" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.080890 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-utilities\") pod \"e56d18b9-f976-4d6c-81da-5791c07cc79f\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.080950 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-524fc\" (UniqueName: \"kubernetes.io/projected/e56d18b9-f976-4d6c-81da-5791c07cc79f-kube-api-access-524fc\") pod \"e56d18b9-f976-4d6c-81da-5791c07cc79f\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.081002 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-catalog-content\") pod \"e56d18b9-f976-4d6c-81da-5791c07cc79f\" (UID: \"e56d18b9-f976-4d6c-81da-5791c07cc79f\") " Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.084942 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-utilities" (OuterVolumeSpecName: "utilities") pod "e56d18b9-f976-4d6c-81da-5791c07cc79f" (UID: "e56d18b9-f976-4d6c-81da-5791c07cc79f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.085816 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.087524 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56d18b9-f976-4d6c-81da-5791c07cc79f-kube-api-access-524fc" (OuterVolumeSpecName: "kube-api-access-524fc") pod "e56d18b9-f976-4d6c-81da-5791c07cc79f" (UID: "e56d18b9-f976-4d6c-81da-5791c07cc79f"). InnerVolumeSpecName "kube-api-access-524fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.143067 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e56d18b9-f976-4d6c-81da-5791c07cc79f" (UID: "e56d18b9-f976-4d6c-81da-5791c07cc79f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.152138 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.152209 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.152264 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.152866 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc69fc59569a57f3230435206eb87de05390f897bd389b5558c6be2f4c0990e0"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.152910 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://dc69fc59569a57f3230435206eb87de05390f897bd389b5558c6be2f4c0990e0" gracePeriod=600 Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.182832 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.182877 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-524fc\" (UniqueName: \"kubernetes.io/projected/e56d18b9-f976-4d6c-81da-5791c07cc79f-kube-api-access-524fc\") on node \"crc\" DevicePath \"\"" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 
08:54:39.182937 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56d18b9-f976-4d6c-81da-5791c07cc79f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.336147 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4bhr"] Dec 01 08:54:39 crc kubenswrapper[4689]: I1201 08:54:39.352250 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d4bhr"] Dec 01 08:54:40 crc kubenswrapper[4689]: I1201 08:54:40.016323 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="dc69fc59569a57f3230435206eb87de05390f897bd389b5558c6be2f4c0990e0" exitCode=0 Dec 01 08:54:40 crc kubenswrapper[4689]: I1201 08:54:40.016419 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"dc69fc59569a57f3230435206eb87de05390f897bd389b5558c6be2f4c0990e0"} Dec 01 08:54:40 crc kubenswrapper[4689]: I1201 08:54:40.016842 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"d1e70c73c88326989d073faf6067f98f45b064162bc9402e3b9575ef624c63ae"} Dec 01 08:54:40 crc kubenswrapper[4689]: I1201 08:54:40.016877 4689 scope.go:117] "RemoveContainer" containerID="74b1ead9c91ab196fa5f6493d6eb41ab2d35580a1ad359148d766458297d4a15" Dec 01 08:54:41 crc kubenswrapper[4689]: I1201 08:54:41.063324 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56d18b9-f976-4d6c-81da-5791c07cc79f" path="/var/lib/kubelet/pods/e56d18b9-f976-4d6c-81da-5791c07cc79f/volumes" Dec 01 08:54:41 crc kubenswrapper[4689]: I1201 08:54:41.546607 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrrjc"] Dec 01 08:54:41 crc kubenswrapper[4689]: I1201 08:54:41.547155 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rrrjc" podUID="ace97617-7cba-4be0-98d9-09227611793a" containerName="registry-server" containerID="cri-o://91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c" gracePeriod=2 Dec 01 08:54:41 crc kubenswrapper[4689]: I1201 08:54:41.968457 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.034983 4689 generic.go:334] "Generic (PLEG): container finished" podID="ace97617-7cba-4be0-98d9-09227611793a" containerID="91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c" exitCode=0 Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.035056 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrrjc" event={"ID":"ace97617-7cba-4be0-98d9-09227611793a","Type":"ContainerDied","Data":"91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c"} Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.035074 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrrjc" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.035106 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrrjc" event={"ID":"ace97617-7cba-4be0-98d9-09227611793a","Type":"ContainerDied","Data":"f75fa368834bf1dbc1197588d7bfea790d8f737eb8cd7afdee6e6a85eb3bf5ac"} Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.035128 4689 scope.go:117] "RemoveContainer" containerID="91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.050498 4689 scope.go:117] "RemoveContainer" containerID="968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.050916 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-utilities\") pod \"ace97617-7cba-4be0-98d9-09227611793a\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.051800 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-utilities" (OuterVolumeSpecName: "utilities") pod "ace97617-7cba-4be0-98d9-09227611793a" (UID: "ace97617-7cba-4be0-98d9-09227611793a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.053200 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.074579 4689 scope.go:117] "RemoveContainer" containerID="25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.110094 4689 scope.go:117] "RemoveContainer" containerID="91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c" Dec 01 08:54:42 crc kubenswrapper[4689]: E1201 08:54:42.110580 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c\": container with ID starting with 91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c not found: ID does not exist" containerID="91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.110628 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c"} err="failed to get container status \"91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c\": rpc error: code = NotFound desc = could not find container \"91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c\": container with ID starting with 91a7d2c153c7228e6c7162afe12245451583e5dc07343c92eca41940b281439c not found: ID does not exist" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.110687 4689 scope.go:117] "RemoveContainer" containerID="968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b" Dec 01 08:54:42 crc kubenswrapper[4689]: E1201 08:54:42.111400 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b\": container with ID starting with 968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b not found: ID does not exist" containerID="968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.111426 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b"} err="failed to get container status \"968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b\": rpc error: code = NotFound desc = could not find container \"968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b\": container with ID starting with 968a1127339876820218e6fdf20bf2e841c5b46f5567e2f590530b60dc5c8f6b not found: ID does not exist" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.111442 4689 scope.go:117] "RemoveContainer" containerID="25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc" Dec 01 08:54:42 crc kubenswrapper[4689]: E1201 08:54:42.120813 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc\": container with ID starting with 25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc not found: ID does not exist" containerID="25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.120860 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc"} err="failed to get container status \"25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc\": rpc error: code = NotFound desc = could not find container \"25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc\": container with ID starting with 25c0c3e3a35f8530cd707f58f1193f806e41e6606feda8ffc9d95a65ca732cbc not found: ID does not exist" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.153560 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpns6\" (UniqueName: \"kubernetes.io/projected/ace97617-7cba-4be0-98d9-09227611793a-kube-api-access-mpns6\") pod \"ace97617-7cba-4be0-98d9-09227611793a\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.153631 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-catalog-content\") pod \"ace97617-7cba-4be0-98d9-09227611793a\" (UID: \"ace97617-7cba-4be0-98d9-09227611793a\") " Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.158824 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace97617-7cba-4be0-98d9-09227611793a-kube-api-access-mpns6" (OuterVolumeSpecName: "kube-api-access-mpns6") pod "ace97617-7cba-4be0-98d9-09227611793a" (UID: "ace97617-7cba-4be0-98d9-09227611793a"). InnerVolumeSpecName "kube-api-access-mpns6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.171984 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ace97617-7cba-4be0-98d9-09227611793a" (UID: "ace97617-7cba-4be0-98d9-09227611793a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.254325 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace97617-7cba-4be0-98d9-09227611793a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.254411 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpns6\" (UniqueName: \"kubernetes.io/projected/ace97617-7cba-4be0-98d9-09227611793a-kube-api-access-mpns6\") on node \"crc\" DevicePath \"\"" Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.384661 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrrjc"] Dec 01 08:54:42 crc kubenswrapper[4689]: I1201 08:54:42.390930 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrrjc"] Dec 01 08:54:43 crc kubenswrapper[4689]: I1201 08:54:43.056015 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace97617-7cba-4be0-98d9-09227611793a" path="/var/lib/kubelet/pods/ace97617-7cba-4be0-98d9-09227611793a/volumes" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.261122 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6hl9x"] Dec 01 08:54:48 crc kubenswrapper[4689]: E1201 08:54:48.261938 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerName="registry-server" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.261962 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerName="registry-server" Dec 01 08:54:48 crc kubenswrapper[4689]: E1201 08:54:48.261990 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerName="extract-utilities" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.261997 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerName="extract-utilities" Dec 01 08:54:48 crc kubenswrapper[4689]: E1201 08:54:48.262006 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerName="registry-server" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262012 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerName="registry-server" Dec 01 08:54:48 crc kubenswrapper[4689]: E1201 08:54:48.262025 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerName="extract-utilities" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262031 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerName="extract-utilities" Dec 01 08:54:48 crc kubenswrapper[4689]: E1201 08:54:48.262044 4689 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ace97617-7cba-4be0-98d9-09227611793a" containerName="registry-server" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262050 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace97617-7cba-4be0-98d9-09227611793a" containerName="registry-server" Dec 01 08:54:48 crc kubenswrapper[4689]: E1201 08:54:48.262065 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace97617-7cba-4be0-98d9-09227611793a" containerName="extract-content" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262073 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace97617-7cba-4be0-98d9-09227611793a" containerName="extract-content" Dec 01 08:54:48 crc kubenswrapper[4689]: E1201 08:54:48.262082 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerName="extract-content" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262088 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerName="extract-content" Dec 01 08:54:48 crc kubenswrapper[4689]: E1201 08:54:48.262100 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerName="extract-content" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262106 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerName="extract-content" Dec 01 08:54:48 crc kubenswrapper[4689]: E1201 08:54:48.262118 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace97617-7cba-4be0-98d9-09227611793a" containerName="extract-utilities" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262124 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace97617-7cba-4be0-98d9-09227611793a" containerName="extract-utilities" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262276 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56d18b9-f976-4d6c-81da-5791c07cc79f" containerName="registry-server" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262289 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bcb09c3-4edd-4caf-af39-a1b03319c91c" containerName="registry-server" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.262300 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace97617-7cba-4be0-98d9-09227611793a" containerName="registry-server" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.266544 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.274130 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.279570 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.279759 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.279973 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-n5xqh" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.288889 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6hl9x"] Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.341027 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c8mdm"] Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.342288 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.343250 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a2e6b-975d-4968-abf2-62137709ab4e-config\") pod \"dnsmasq-dns-675f4bcbfc-6hl9x\" (UID: \"b01a2e6b-975d-4968-abf2-62137709ab4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.343295 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snrrk\" (UniqueName: \"kubernetes.io/projected/b01a2e6b-975d-4968-abf2-62137709ab4e-kube-api-access-snrrk\") pod \"dnsmasq-dns-675f4bcbfc-6hl9x\" (UID: \"b01a2e6b-975d-4968-abf2-62137709ab4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.348909 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.363727 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c8mdm"] Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.444834 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-config\") pod \"dnsmasq-dns-78dd6ddcc-c8mdm\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.445252 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws4c2\" (UniqueName: \"kubernetes.io/projected/f74987d3-25fa-4744-860a-e4c272305c81-kube-api-access-ws4c2\") pod \"dnsmasq-dns-78dd6ddcc-c8mdm\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.445329 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a2e6b-975d-4968-abf2-62137709ab4e-config\") pod \"dnsmasq-dns-675f4bcbfc-6hl9x\" (UID: \"b01a2e6b-975d-4968-abf2-62137709ab4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" Dec 
01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.445383 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snrrk\" (UniqueName: \"kubernetes.io/projected/b01a2e6b-975d-4968-abf2-62137709ab4e-kube-api-access-snrrk\") pod \"dnsmasq-dns-675f4bcbfc-6hl9x\" (UID: \"b01a2e6b-975d-4968-abf2-62137709ab4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.445409 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c8mdm\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.446457 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a2e6b-975d-4968-abf2-62137709ab4e-config\") pod \"dnsmasq-dns-675f4bcbfc-6hl9x\" (UID: \"b01a2e6b-975d-4968-abf2-62137709ab4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.475932 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snrrk\" (UniqueName: \"kubernetes.io/projected/b01a2e6b-975d-4968-abf2-62137709ab4e-kube-api-access-snrrk\") pod \"dnsmasq-dns-675f4bcbfc-6hl9x\" (UID: \"b01a2e6b-975d-4968-abf2-62137709ab4e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.546535 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c8mdm\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.546649 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-config\") pod \"dnsmasq-dns-78dd6ddcc-c8mdm\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.546672 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws4c2\" (UniqueName: \"kubernetes.io/projected/f74987d3-25fa-4744-860a-e4c272305c81-kube-api-access-ws4c2\") pod \"dnsmasq-dns-78dd6ddcc-c8mdm\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.547600 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-c8mdm\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.547683 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-config\") pod \"dnsmasq-dns-78dd6ddcc-c8mdm\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.565435 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-ws4c2\" (UniqueName: \"kubernetes.io/projected/f74987d3-25fa-4744-860a-e4c272305c81-kube-api-access-ws4c2\") pod \"dnsmasq-dns-78dd6ddcc-c8mdm\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") " pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.641892 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" Dec 01 08:54:48 crc kubenswrapper[4689]: I1201 08:54:48.671061 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" Dec 01 08:54:49 crc kubenswrapper[4689]: I1201 08:54:49.333426 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6hl9x"] Dec 01 08:54:49 crc kubenswrapper[4689]: W1201 08:54:49.341312 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb01a2e6b_975d_4968_abf2_62137709ab4e.slice/crio-d78ed2bd7c6580f6eea60c4c70b41786d78b2099a6eb9eca3b763d30cc60f10f WatchSource:0}: Error finding container d78ed2bd7c6580f6eea60c4c70b41786d78b2099a6eb9eca3b763d30cc60f10f: Status 404 returned error can't find the container with id d78ed2bd7c6580f6eea60c4c70b41786d78b2099a6eb9eca3b763d30cc60f10f Dec 01 08:54:49 crc kubenswrapper[4689]: W1201 08:54:49.438545 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf74987d3_25fa_4744_860a_e4c272305c81.slice/crio-6bf7e8c0859e6111d38040655317249741c3d35d0f65d7672ba230855710639b WatchSource:0}: Error finding container 6bf7e8c0859e6111d38040655317249741c3d35d0f65d7672ba230855710639b: Status 404 returned error can't find the container with id 6bf7e8c0859e6111d38040655317249741c3d35d0f65d7672ba230855710639b Dec 01 08:54:49 crc kubenswrapper[4689]: I1201 08:54:49.461951 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c8mdm"] Dec 01 08:54:50 crc kubenswrapper[4689]: I1201 08:54:50.101637 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" event={"ID":"b01a2e6b-975d-4968-abf2-62137709ab4e","Type":"ContainerStarted","Data":"d78ed2bd7c6580f6eea60c4c70b41786d78b2099a6eb9eca3b763d30cc60f10f"} Dec 01 08:54:50 crc kubenswrapper[4689]: I1201 08:54:50.104622 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" event={"ID":"f74987d3-25fa-4744-860a-e4c272305c81","Type":"ContainerStarted","Data":"6bf7e8c0859e6111d38040655317249741c3d35d0f65d7672ba230855710639b"} Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.315998 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6hl9x"] Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.363346 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvz5v"] Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.365093 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.398427 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvz5v"] Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.404230 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xvz5v\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") " pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.404306 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvrjn\" (UniqueName: \"kubernetes.io/projected/8957703f-79dd-4f15-bad3-d7be659b8de6-kube-api-access-hvrjn\") pod \"dnsmasq-dns-666b6646f7-xvz5v\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") " pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.404334 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-config\") pod \"dnsmasq-dns-666b6646f7-xvz5v\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") " pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.508076 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xvz5v\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") " pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.508147 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvrjn\" (UniqueName: \"kubernetes.io/projected/8957703f-79dd-4f15-bad3-d7be659b8de6-kube-api-access-hvrjn\") pod \"dnsmasq-dns-666b6646f7-xvz5v\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") " pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.508172 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-config\") pod \"dnsmasq-dns-666b6646f7-xvz5v\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") " pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.509012 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-config\") pod \"dnsmasq-dns-666b6646f7-xvz5v\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") " pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.509517 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xvz5v\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") " pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.549049 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvrjn\" (UniqueName: 
\"kubernetes.io/projected/8957703f-79dd-4f15-bad3-d7be659b8de6-kube-api-access-hvrjn\") pod \"dnsmasq-dns-666b6646f7-xvz5v\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") " pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.684356 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.710759 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c8mdm"] Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.779298 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v6tfg"] Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.780966 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.795044 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v6tfg"] Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.812405 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-config\") pod \"dnsmasq-dns-57d769cc4f-v6tfg\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") " pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.812452 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-v6tfg\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") " pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.812479 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhg6f\" (UniqueName: \"kubernetes.io/projected/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-kube-api-access-fhg6f\") pod \"dnsmasq-dns-57d769cc4f-v6tfg\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") " pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.919675 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-config\") pod \"dnsmasq-dns-57d769cc4f-v6tfg\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") " pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.919712 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-v6tfg\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") " pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.919750 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhg6f\" (UniqueName: \"kubernetes.io/projected/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-kube-api-access-fhg6f\") pod \"dnsmasq-dns-57d769cc4f-v6tfg\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") " pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.921176 4689 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-config\") pod \"dnsmasq-dns-57d769cc4f-v6tfg\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") " pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.927853 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-v6tfg\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") " pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:51 crc kubenswrapper[4689]: I1201 08:54:51.977192 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhg6f\" (UniqueName: \"kubernetes.io/projected/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-kube-api-access-fhg6f\") pod \"dnsmasq-dns-57d769cc4f-v6tfg\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") " pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.138958 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.343335 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvz5v"] Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.548899 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.558612 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.562258 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.562451 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.562607 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jgdkg" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.562723 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.562829 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.562933 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.563035 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.568650 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.641402 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxzqp\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-kube-api-access-bxzqp\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.641456 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.641494 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.641520 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.641547 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.641564 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edc6a475-296b-4f29-a48b-6876138662fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.641598 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.641660 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.642339 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edc6a475-296b-4f29-a48b-6876138662fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.642399 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.642493 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746202 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746253 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edc6a475-296b-4f29-a48b-6876138662fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746282 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746305 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746322 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxzqp\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-kube-api-access-bxzqp\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746339 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746383 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746405 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746426 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 
08:54:52.746445 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edc6a475-296b-4f29-a48b-6876138662fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.746477 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.748012 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v6tfg"] Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.749109 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.749812 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.750500 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.752939 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.753918 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.754397 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.755859 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.759911 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/edc6a475-296b-4f29-a48b-6876138662fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.770901 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxzqp\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-kube-api-access-bxzqp\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.775124 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edc6a475-296b-4f29-a48b-6876138662fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.804778 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.848507 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.901709 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.908331 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.909643 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.920140 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.920390 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.920448 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.920538 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.920624 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.920796 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rgwmj" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.920969 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.921711 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954706 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbxm\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-kube-api-access-nvbxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954750 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954770 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954788 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954812 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954826 4689 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954859 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954899 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954921 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954942 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:52 crc kubenswrapper[4689]: I1201 08:54:52.954955 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.055989 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056034 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056061 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056078 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056135 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvbxm\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-kube-api-access-nvbxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056158 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056179 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056200 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056221 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056239 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.056262 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.057630 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.058726 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.059190 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.059321 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.059659 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.061099 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.067624 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.088741 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.089647 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.094958 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.106730 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbxm\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-kube-api-access-nvbxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.114834 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.162673 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" event={"ID":"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6","Type":"ContainerStarted","Data":"958955d70b5b5ebd403995bde8a821a2c8216da345755bcfbfcd113713b57257"} Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.170173 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" event={"ID":"8957703f-79dd-4f15-bad3-d7be659b8de6","Type":"ContainerStarted","Data":"381a1c27f4ccc911e074d5bcdb746df5973ce1da3aac3310f0f2ea3373edcbd7"} Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.238874 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.350717 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 08:54:53 crc kubenswrapper[4689]: W1201 08:54:53.365206 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedc6a475_296b_4f29_a48b_6876138662fd.slice/crio-a4015cc93f4b621890027a1de825002152a9a951ffa58be6dc55220aaa0a725c WatchSource:0}: Error finding container a4015cc93f4b621890027a1de825002152a9a951ffa58be6dc55220aaa0a725c: Status 404 returned error can't find the container with id a4015cc93f4b621890027a1de825002152a9a951ffa58be6dc55220aaa0a725c Dec 01 08:54:53 crc kubenswrapper[4689]: I1201 08:54:53.727141 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 08:54:53 crc kubenswrapper[4689]: W1201 08:54:53.740573 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50bb385d_f9f3_4a0d_8d26_c0a69a6eba87.slice/crio-553c3448bc49ceb428b6d46962082fe472b898b74e385eacaf8726dcf2345e34 WatchSource:0}: Error finding container 553c3448bc49ceb428b6d46962082fe472b898b74e385eacaf8726dcf2345e34: Status 404 returned error can't find the container with id 553c3448bc49ceb428b6d46962082fe472b898b74e385eacaf8726dcf2345e34 Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.125805 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.127704 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.133517 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-gx5x2" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.133944 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.134261 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.135011 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.145842 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.151419 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.181080 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/555543d8-21bb-4dba-9c08-ab82e90ea894-config-data-generated\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.181130 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555543d8-21bb-4dba-9c08-ab82e90ea894-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.181347 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/555543d8-21bb-4dba-9c08-ab82e90ea894-kolla-config\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.181620 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/555543d8-21bb-4dba-9c08-ab82e90ea894-operator-scripts\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.181684 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.181705 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/555543d8-21bb-4dba-9c08-ab82e90ea894-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.181749 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/555543d8-21bb-4dba-9c08-ab82e90ea894-config-data-default\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.181816 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzssz\" (UniqueName: \"kubernetes.io/projected/555543d8-21bb-4dba-9c08-ab82e90ea894-kube-api-access-qzssz\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.264687 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87","Type":"ContainerStarted","Data":"553c3448bc49ceb428b6d46962082fe472b898b74e385eacaf8726dcf2345e34"} Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.273097 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"edc6a475-296b-4f29-a48b-6876138662fd","Type":"ContainerStarted","Data":"a4015cc93f4b621890027a1de825002152a9a951ffa58be6dc55220aaa0a725c"} Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.287293 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/555543d8-21bb-4dba-9c08-ab82e90ea894-operator-scripts\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.287350 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.287379 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/555543d8-21bb-4dba-9c08-ab82e90ea894-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.287399 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/555543d8-21bb-4dba-9c08-ab82e90ea894-config-data-default\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.287434 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzssz\" (UniqueName: \"kubernetes.io/projected/555543d8-21bb-4dba-9c08-ab82e90ea894-kube-api-access-qzssz\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.287451 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/555543d8-21bb-4dba-9c08-ab82e90ea894-config-data-generated\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.287473 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555543d8-21bb-4dba-9c08-ab82e90ea894-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.287514 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/555543d8-21bb-4dba-9c08-ab82e90ea894-kolla-config\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.287798 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.288099 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/555543d8-21bb-4dba-9c08-ab82e90ea894-config-data-generated\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.289087 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/555543d8-21bb-4dba-9c08-ab82e90ea894-operator-scripts\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.292681 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/555543d8-21bb-4dba-9c08-ab82e90ea894-kolla-config\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.297858 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555543d8-21bb-4dba-9c08-ab82e90ea894-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.298451 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/555543d8-21bb-4dba-9c08-ab82e90ea894-config-data-default\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.311387 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/555543d8-21bb-4dba-9c08-ab82e90ea894-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.320598 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzssz\" (UniqueName: \"kubernetes.io/projected/555543d8-21bb-4dba-9c08-ab82e90ea894-kube-api-access-qzssz\") pod \"openstack-galera-0\" (UID: 
\"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.344484 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"555543d8-21bb-4dba-9c08-ab82e90ea894\") " pod="openstack/openstack-galera-0" Dec 01 08:54:54 crc kubenswrapper[4689]: I1201 08:54:54.458888 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.342860 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-95kdv"] Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.407716 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-95kdv"] Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.407936 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.514045 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-utilities\") pod \"redhat-operators-95kdv\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.516014 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-catalog-content\") pod \"redhat-operators-95kdv\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.516238 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6l22\" (UniqueName: \"kubernetes.io/projected/9368a0f8-4d7e-49e3-b575-7f901ed6464c-kube-api-access-h6l22\") pod \"redhat-operators-95kdv\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.624013 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-utilities\") pod \"redhat-operators-95kdv\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.624076 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-catalog-content\") pod \"redhat-operators-95kdv\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.624100 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6l22\" (UniqueName: \"kubernetes.io/projected/9368a0f8-4d7e-49e3-b575-7f901ed6464c-kube-api-access-h6l22\") pod \"redhat-operators-95kdv\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc 
kubenswrapper[4689]: I1201 08:54:55.624771 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-utilities\") pod \"redhat-operators-95kdv\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.624993 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-catalog-content\") pod \"redhat-operators-95kdv\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.628223 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.634130 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.641648 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.641754 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-cgvxs" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.642825 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.643129 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.666656 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6l22\" (UniqueName: \"kubernetes.io/projected/9368a0f8-4d7e-49e3-b575-7f901ed6464c-kube-api-access-h6l22\") pod \"redhat-operators-95kdv\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.671223 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.689339 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.725564 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bc1ecd4c-eede-492c-ac97-071c42545607-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.725610 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.725683 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/bc1ecd4c-eede-492c-ac97-071c42545607-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.725703 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96gm\" (UniqueName: \"kubernetes.io/projected/bc1ecd4c-eede-492c-ac97-071c42545607-kube-api-access-g96gm\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.725741 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1ecd4c-eede-492c-ac97-071c42545607-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.725767 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1ecd4c-eede-492c-ac97-071c42545607-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.725793 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc1ecd4c-eede-492c-ac97-071c42545607-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.725817 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1ecd4c-eede-492c-ac97-071c42545607-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.761755 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.830475 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc1ecd4c-eede-492c-ac97-071c42545607-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.830528 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1ecd4c-eede-492c-ac97-071c42545607-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.830583 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bc1ecd4c-eede-492c-ac97-071c42545607-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.830608 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.830647 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bc1ecd4c-eede-492c-ac97-071c42545607-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.830666 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g96gm\" (UniqueName: \"kubernetes.io/projected/bc1ecd4c-eede-492c-ac97-071c42545607-kube-api-access-g96gm\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.830694 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1ecd4c-eede-492c-ac97-071c42545607-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.830715 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1ecd4c-eede-492c-ac97-071c42545607-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.833670 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Dec 01 
08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.834403 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bc1ecd4c-eede-492c-ac97-071c42545607-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.835410 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1ecd4c-eede-492c-ac97-071c42545607-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.835800 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bc1ecd4c-eede-492c-ac97-071c42545607-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.836110 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bc1ecd4c-eede-492c-ac97-071c42545607-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.844837 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1ecd4c-eede-492c-ac97-071c42545607-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.858129 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1ecd4c-eede-492c-ac97-071c42545607-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.860173 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g96gm\" (UniqueName: \"kubernetes.io/projected/bc1ecd4c-eede-492c-ac97-071c42545607-kube-api-access-g96gm\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:55 crc kubenswrapper[4689]: I1201 08:54:55.903281 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bc1ecd4c-eede-492c-ac97-071c42545607\") " pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.008696 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.016648 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.016821 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.021298 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-d94d2" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.021443 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.021782 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.038888 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.138157 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2t4h\" (UniqueName: \"kubernetes.io/projected/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-kube-api-access-g2t4h\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.138207 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-config-data\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.138230 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.138322 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-kolla-config\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.138341 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.240119 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2t4h\" (UniqueName: \"kubernetes.io/projected/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-kube-api-access-g2t4h\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.240172 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-config-data\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.240199 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.240271 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-kolla-config\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.240293 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.241114 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-kolla-config\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.241163 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-config-data\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.251396 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.251965 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.265456 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2t4h\" (UniqueName: \"kubernetes.io/projected/f04989a7-e9bc-4d0b-a7a1-efe12657bd2b-kube-api-access-g2t4h\") pod \"memcached-0\" (UID: \"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b\") " pod="openstack/memcached-0" Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.324421 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"555543d8-21bb-4dba-9c08-ab82e90ea894","Type":"ContainerStarted","Data":"886e2eff31943172235a62aba2a2d4ea68a8b7db72d9019ae885225477b95388"} Dec 01 08:54:56 crc kubenswrapper[4689]: I1201 08:54:56.367656 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 01 08:54:57 crc kubenswrapper[4689]: I1201 08:54:56.822540 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-95kdv"] Dec 01 08:54:57 crc kubenswrapper[4689]: W1201 08:54:56.842082 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9368a0f8_4d7e_49e3_b575_7f901ed6464c.slice/crio-ee7bb79a4ce94b2df00b36c91c18a4a0006ccfec49caa9fe1c53446b3699b4e7 WatchSource:0}: Error finding container ee7bb79a4ce94b2df00b36c91c18a4a0006ccfec49caa9fe1c53446b3699b4e7: Status 404 returned error can't find the container with id ee7bb79a4ce94b2df00b36c91c18a4a0006ccfec49caa9fe1c53446b3699b4e7 Dec 01 08:54:57 crc kubenswrapper[4689]: I1201 08:54:57.005589 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 01 08:54:57 crc kubenswrapper[4689]: I1201 08:54:57.377624 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bc1ecd4c-eede-492c-ac97-071c42545607","Type":"ContainerStarted","Data":"ec594caf28d7cf5fc219c4355511db6d6867aa34df7e516af8e6bdecbd398dc0"} Dec 01 08:54:57 crc kubenswrapper[4689]: I1201 08:54:57.384996 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95kdv" event={"ID":"9368a0f8-4d7e-49e3-b575-7f901ed6464c","Type":"ContainerStarted","Data":"fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab"} Dec 01 08:54:57 crc kubenswrapper[4689]: I1201 08:54:57.385038 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95kdv" event={"ID":"9368a0f8-4d7e-49e3-b575-7f901ed6464c","Type":"ContainerStarted","Data":"ee7bb79a4ce94b2df00b36c91c18a4a0006ccfec49caa9fe1c53446b3699b4e7"} Dec 01 08:54:57 crc kubenswrapper[4689]: I1201 08:54:57.710256 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.425681 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.426645 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.444963 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mgvfj" Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.447042 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.490797 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b","Type":"ContainerStarted","Data":"728df44ac198bfef21d47170a4a9350d98fcc1d4f2f01a8977b1978bd9db74e7"} Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.493643 4689 generic.go:334] "Generic (PLEG): container finished" podID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerID="fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab" exitCode=0 Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.493670 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95kdv" event={"ID":"9368a0f8-4d7e-49e3-b575-7f901ed6464c","Type":"ContainerDied","Data":"fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab"} Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.595838 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq4xn\" (UniqueName: \"kubernetes.io/projected/06aa5768-7753-4a2d-8e40-96cea62d055c-kube-api-access-fq4xn\") pod \"kube-state-metrics-0\" (UID: \"06aa5768-7753-4a2d-8e40-96cea62d055c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.763028 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq4xn\" (UniqueName: \"kubernetes.io/projected/06aa5768-7753-4a2d-8e40-96cea62d055c-kube-api-access-fq4xn\") pod \"kube-state-metrics-0\" (UID: \"06aa5768-7753-4a2d-8e40-96cea62d055c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:54:58 crc kubenswrapper[4689]: I1201 08:54:58.799206 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq4xn\" (UniqueName: \"kubernetes.io/projected/06aa5768-7753-4a2d-8e40-96cea62d055c-kube-api-access-fq4xn\") pod \"kube-state-metrics-0\" (UID: \"06aa5768-7753-4a2d-8e40-96cea62d055c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.085130 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.281914 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-48955"] Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.305567 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-48955"] Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.307769 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.324029 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.324543 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xwx59" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.324794 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.379580 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8731b0fb-0429-4730-8da9-cc182fdf29e1-var-run-ovn\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.379655 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8731b0fb-0429-4730-8da9-cc182fdf29e1-var-log-ovn\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.379707 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8731b0fb-0429-4730-8da9-cc182fdf29e1-ovn-controller-tls-certs\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.379730 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8731b0fb-0429-4730-8da9-cc182fdf29e1-scripts\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.379748 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znwgw\" (UniqueName: \"kubernetes.io/projected/8731b0fb-0429-4730-8da9-cc182fdf29e1-kube-api-access-znwgw\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.379770 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8731b0fb-0429-4730-8da9-cc182fdf29e1-combined-ca-bundle\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.379800 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8731b0fb-0429-4730-8da9-cc182fdf29e1-var-run\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.392849 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-sj4xx"] Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.395248 4689 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.405988 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sj4xx"] Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.482256 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8731b0fb-0429-4730-8da9-cc182fdf29e1-ovn-controller-tls-certs\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.482668 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8731b0fb-0429-4730-8da9-cc182fdf29e1-scripts\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.482696 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znwgw\" (UniqueName: \"kubernetes.io/projected/8731b0fb-0429-4730-8da9-cc182fdf29e1-kube-api-access-znwgw\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.482721 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8731b0fb-0429-4730-8da9-cc182fdf29e1-combined-ca-bundle\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.482744 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8731b0fb-0429-4730-8da9-cc182fdf29e1-var-run\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.482907 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8731b0fb-0429-4730-8da9-cc182fdf29e1-var-run-ovn\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.482968 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8731b0fb-0429-4730-8da9-cc182fdf29e1-var-log-ovn\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.483767 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8731b0fb-0429-4730-8da9-cc182fdf29e1-var-log-ovn\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.492117 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8731b0fb-0429-4730-8da9-cc182fdf29e1-var-run\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " 
pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.494097 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8731b0fb-0429-4730-8da9-cc182fdf29e1-var-run-ovn\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.509295 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8731b0fb-0429-4730-8da9-cc182fdf29e1-scripts\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.510109 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8731b0fb-0429-4730-8da9-cc182fdf29e1-ovn-controller-tls-certs\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.528324 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8731b0fb-0429-4730-8da9-cc182fdf29e1-combined-ca-bundle\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.533290 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znwgw\" (UniqueName: \"kubernetes.io/projected/8731b0fb-0429-4730-8da9-cc182fdf29e1-kube-api-access-znwgw\") pod \"ovn-controller-48955\" (UID: \"8731b0fb-0429-4730-8da9-cc182fdf29e1\") " pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.584219 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0d0f0ef-1203-4001-9872-7c32022a4839-scripts\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.584278 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-etc-ovs\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.584297 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-var-lib\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.584556 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-var-log\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.584652 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6w2z\" (UniqueName: \"kubernetes.io/projected/a0d0f0ef-1203-4001-9872-7c32022a4839-kube-api-access-c6w2z\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.584805 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-var-run\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.647978 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-48955" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.686081 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-var-run\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.686157 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0d0f0ef-1203-4001-9872-7c32022a4839-scripts\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.686198 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-etc-ovs\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.686225 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-var-lib\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.686265 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-var-log\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.686298 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6w2z\" (UniqueName: \"kubernetes.io/projected/a0d0f0ef-1203-4001-9872-7c32022a4839-kube-api-access-c6w2z\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.687155 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-var-run\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.687640 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-etc-ovs\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.687731 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-var-log\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.687890 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a0d0f0ef-1203-4001-9872-7c32022a4839-var-lib\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.689320 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0d0f0ef-1203-4001-9872-7c32022a4839-scripts\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.713983 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6w2z\" (UniqueName: \"kubernetes.io/projected/a0d0f0ef-1203-4001-9872-7c32022a4839-kube-api-access-c6w2z\") pod \"ovn-controller-ovs-sj4xx\" (UID: \"a0d0f0ef-1203-4001-9872-7c32022a4839\") " pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:54:59 crc kubenswrapper[4689]: I1201 08:54:59.986310 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 08:55:00 crc kubenswrapper[4689]: W1201 08:55:00.012702 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06aa5768_7753_4a2d_8e40_96cea62d055c.slice/crio-4686626c6e4ed23fe9fa424e4f1dc1c9ac0d67a1a54b5a1938e0bd987d830ba0 WatchSource:0}: Error finding container 4686626c6e4ed23fe9fa424e4f1dc1c9ac0d67a1a54b5a1938e0bd987d830ba0: Status 404 returned error can't find the container with id 4686626c6e4ed23fe9fa424e4f1dc1c9ac0d67a1a54b5a1938e0bd987d830ba0 Dec 01 08:55:00 crc kubenswrapper[4689]: I1201 08:55:00.013051 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:55:00 crc kubenswrapper[4689]: I1201 08:55:00.456739 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-48955"] Dec 01 08:55:00 crc kubenswrapper[4689]: I1201 08:55:00.563964 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95kdv" event={"ID":"9368a0f8-4d7e-49e3-b575-7f901ed6464c","Type":"ContainerStarted","Data":"34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b"} Dec 01 08:55:00 crc kubenswrapper[4689]: I1201 08:55:00.581545 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-48955" event={"ID":"8731b0fb-0429-4730-8da9-cc182fdf29e1","Type":"ContainerStarted","Data":"cf75e4b0d15855cbccc50d6f8133c989cbc05239be56cef8424a90dd5bb4aff1"} Dec 01 08:55:00 crc kubenswrapper[4689]: I1201 08:55:00.583744 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"06aa5768-7753-4a2d-8e40-96cea62d055c","Type":"ContainerStarted","Data":"4686626c6e4ed23fe9fa424e4f1dc1c9ac0d67a1a54b5a1938e0bd987d830ba0"} Dec 01 08:55:02 crc kubenswrapper[4689]: I1201 08:55:02.523892 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sj4xx"] Dec 01 08:55:02 crc kubenswrapper[4689]: I1201 08:55:02.675912 4689 generic.go:334] "Generic (PLEG): container finished" podID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerID="34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b" exitCode=0 Dec 01 08:55:02 crc kubenswrapper[4689]: I1201 08:55:02.676237 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95kdv" event={"ID":"9368a0f8-4d7e-49e3-b575-7f901ed6464c","Type":"ContainerDied","Data":"34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b"} Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.306584 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.313236 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.325065 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.325502 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.325717 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wqzr2" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.325895 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.326116 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.342514 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.403451 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.403505 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150dfc79-4971-4c3d-aada-13fc85bd101c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.403530 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52zmp\" (UniqueName: \"kubernetes.io/projected/150dfc79-4971-4c3d-aada-13fc85bd101c-kube-api-access-52zmp\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.403549 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150dfc79-4971-4c3d-aada-13fc85bd101c-config\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.403576 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/150dfc79-4971-4c3d-aada-13fc85bd101c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.403624 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150dfc79-4971-4c3d-aada-13fc85bd101c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.403672 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/150dfc79-4971-4c3d-aada-13fc85bd101c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.403718 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/150dfc79-4971-4c3d-aada-13fc85bd101c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.505555 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/150dfc79-4971-4c3d-aada-13fc85bd101c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.505636 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/150dfc79-4971-4c3d-aada-13fc85bd101c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.505668 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.505710 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150dfc79-4971-4c3d-aada-13fc85bd101c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.505754 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52zmp\" (UniqueName: \"kubernetes.io/projected/150dfc79-4971-4c3d-aada-13fc85bd101c-kube-api-access-52zmp\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.505779 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150dfc79-4971-4c3d-aada-13fc85bd101c-config\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.505811 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/150dfc79-4971-4c3d-aada-13fc85bd101c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.505878 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150dfc79-4971-4c3d-aada-13fc85bd101c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: 
I1201 08:55:03.509180 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/150dfc79-4971-4c3d-aada-13fc85bd101c-config\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.509728 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/150dfc79-4971-4c3d-aada-13fc85bd101c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.510167 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.510386 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150dfc79-4971-4c3d-aada-13fc85bd101c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.520972 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150dfc79-4971-4c3d-aada-13fc85bd101c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.525160 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/150dfc79-4971-4c3d-aada-13fc85bd101c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.556045 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/150dfc79-4971-4c3d-aada-13fc85bd101c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.589409 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52zmp\" (UniqueName: \"kubernetes.io/projected/150dfc79-4971-4c3d-aada-13fc85bd101c-kube-api-access-52zmp\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.635145 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"150dfc79-4971-4c3d-aada-13fc85bd101c\") " pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.673661 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 01 08:55:03 crc kubenswrapper[4689]: I1201 08:55:03.783391 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sj4xx" event={"ID":"a0d0f0ef-1203-4001-9872-7c32022a4839","Type":"ContainerStarted","Data":"b8f93043c5f60cf48e4402d70efb4c2fe80835faba2f325f2b199f81e8722892"} Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.189620 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-t4rfs"] Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.237102 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t4rfs"] Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.237479 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.248333 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5b0566d9-e730-4929-aa69-fba41a7c88c0-ovs-rundir\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.248438 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0566d9-e730-4929-aa69-fba41a7c88c0-config\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.248464 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0566d9-e730-4929-aa69-fba41a7c88c0-combined-ca-bundle\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.248548 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0566d9-e730-4929-aa69-fba41a7c88c0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.248582 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5b0566d9-e730-4929-aa69-fba41a7c88c0-ovn-rundir\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.248963 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9mnw\" (UniqueName: \"kubernetes.io/projected/5b0566d9-e730-4929-aa69-fba41a7c88c0-kube-api-access-h9mnw\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.249876 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 
08:55:04.354540 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5b0566d9-e730-4929-aa69-fba41a7c88c0-ovs-rundir\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.355174 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5b0566d9-e730-4929-aa69-fba41a7c88c0-ovs-rundir\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.355445 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0566d9-e730-4929-aa69-fba41a7c88c0-config\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.355575 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0566d9-e730-4929-aa69-fba41a7c88c0-combined-ca-bundle\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.355649 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0566d9-e730-4929-aa69-fba41a7c88c0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.355683 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5b0566d9-e730-4929-aa69-fba41a7c88c0-ovn-rundir\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.355712 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9mnw\" (UniqueName: \"kubernetes.io/projected/5b0566d9-e730-4929-aa69-fba41a7c88c0-kube-api-access-h9mnw\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.357501 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0566d9-e730-4929-aa69-fba41a7c88c0-config\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.360550 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5b0566d9-e730-4929-aa69-fba41a7c88c0-ovn-rundir\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.391323 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h9mnw\" (UniqueName: \"kubernetes.io/projected/5b0566d9-e730-4929-aa69-fba41a7c88c0-kube-api-access-h9mnw\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.391715 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0566d9-e730-4929-aa69-fba41a7c88c0-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.395717 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0566d9-e730-4929-aa69-fba41a7c88c0-combined-ca-bundle\") pod \"ovn-controller-metrics-t4rfs\" (UID: \"5b0566d9-e730-4929-aa69-fba41a7c88c0\") " pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:04 crc kubenswrapper[4689]: I1201 08:55:04.608291 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t4rfs" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.217611 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.292008 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.293591 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.306245 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.307388 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.307647 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.307812 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-j4zvd" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.312820 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.481040 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b1a856a-afb7-4839-a797-7625521520b2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.481152 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b1a856a-afb7-4839-a797-7625521520b2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.481207 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5b1a856a-afb7-4839-a797-7625521520b2-config\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.481504 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1a856a-afb7-4839-a797-7625521520b2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.481533 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.481596 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b1a856a-afb7-4839-a797-7625521520b2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.481618 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n56l6\" (UniqueName: \"kubernetes.io/projected/5b1a856a-afb7-4839-a797-7625521520b2-kube-api-access-n56l6\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.481678 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b1a856a-afb7-4839-a797-7625521520b2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.583832 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b1a856a-afb7-4839-a797-7625521520b2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.584216 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n56l6\" (UniqueName: \"kubernetes.io/projected/5b1a856a-afb7-4839-a797-7625521520b2-kube-api-access-n56l6\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.584265 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b1a856a-afb7-4839-a797-7625521520b2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.584302 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b1a856a-afb7-4839-a797-7625521520b2-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.584335 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b1a856a-afb7-4839-a797-7625521520b2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.584430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b1a856a-afb7-4839-a797-7625521520b2-config\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.584461 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1a856a-afb7-4839-a797-7625521520b2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.584498 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.585502 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.587509 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b1a856a-afb7-4839-a797-7625521520b2-config\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.589828 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b1a856a-afb7-4839-a797-7625521520b2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.596630 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b1a856a-afb7-4839-a797-7625521520b2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.602314 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b1a856a-afb7-4839-a797-7625521520b2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.605639 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1a856a-afb7-4839-a797-7625521520b2-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.609845 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n56l6\" (UniqueName: \"kubernetes.io/projected/5b1a856a-afb7-4839-a797-7625521520b2-kube-api-access-n56l6\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.612111 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b1a856a-afb7-4839-a797-7625521520b2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.639624 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"5b1a856a-afb7-4839-a797-7625521520b2\") " pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:05 crc kubenswrapper[4689]: I1201 08:55:05.939794 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 01 08:55:13 crc kubenswrapper[4689]: I1201 08:55:13.926460 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"150dfc79-4971-4c3d-aada-13fc85bd101c","Type":"ContainerStarted","Data":"7b7fbe039f1e5b67ac6cda79f9707a7b856a1637de926ffc6f6181f576a03272"} Dec 01 08:55:25 crc kubenswrapper[4689]: E1201 08:55:25.939334 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 01 08:55:25 crc kubenswrapper[4689]: E1201 08:55:25.940028 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzssz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(555543d8-21bb-4dba-9c08-ab82e90ea894): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:55:25 crc kubenswrapper[4689]: E1201 08:55:25.941457 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" Dec 01 08:55:26 crc kubenswrapper[4689]: E1201 08:55:26.065649 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" Dec 01 08:55:27 crc kubenswrapper[4689]: E1201 08:55:27.289996 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 01 08:55:27 crc kubenswrapper[4689]: E1201 08:55:27.290231 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins 
/operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxzqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(edc6a475-296b-4f29-a48b-6876138662fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:55:27 crc kubenswrapper[4689]: E1201 08:55:27.292123 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="edc6a475-296b-4f29-a48b-6876138662fd" Dec 01 08:55:28 crc kubenswrapper[4689]: E1201 08:55:28.119614 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="edc6a475-296b-4f29-a48b-6876138662fd" Dec 01 08:55:29 crc kubenswrapper[4689]: E1201 08:55:29.751474 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 01 08:55:29 crc kubenswrapper[4689]: E1201 
08:55:29.752253 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5bhdh657hc8h59dh658h575h5b6h5f9h556h76h97h659h54ch576h655h597h587h99h597h68bh85hf8h5dbh649hchf4h56hf9h88hb7h587q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2t4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(f04989a7-e9bc-4d0b-a7a1-efe12657bd2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:55:29 crc kubenswrapper[4689]: E1201 08:55:29.753439 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="f04989a7-e9bc-4d0b-a7a1-efe12657bd2b" Dec 01 08:55:29 crc kubenswrapper[4689]: E1201 08:55:29.772116 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 01 08:55:29 crc kubenswrapper[4689]: E1201 08:55:29.772321 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvbxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(50bb385d-f9f3-4a0d-8d26-c0a69a6eba87): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:55:29 crc kubenswrapper[4689]: E1201 08:55:29.773605 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled 
desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" Dec 01 08:55:29 crc kubenswrapper[4689]: E1201 08:55:29.798476 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 01 08:55:29 crc kubenswrapper[4689]: E1201 08:55:29.798688 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g96gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(bc1ecd4c-eede-492c-ac97-071c42545607): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:55:29 crc kubenswrapper[4689]: E1201 08:55:29.800116 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="bc1ecd4c-eede-492c-ac97-071c42545607" Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.144799 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="bc1ecd4c-eede-492c-ac97-071c42545607" Dec 01 08:55:30 crc kubenswrapper[4689]: 
E1201 08:55:30.144859 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.145640 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="f04989a7-e9bc-4d0b-a7a1-efe12657bd2b"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.634213 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.634556 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hvrjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xvz5v_openstack(8957703f-79dd-4f15-bad3-d7be659b8de6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.635801 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" podUID="8957703f-79dd-4f15-bad3-d7be659b8de6"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.640508 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.640687 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws4c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-c8mdm_openstack(f74987d3-25fa-4744-860a-e4c272305c81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.642775 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" podUID="f74987d3-25fa-4744-860a-e4c272305c81"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.805428 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.805633 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d4h669h57hfh659h54bh544h5fh598h55dhd6h5b5h67fh5f4hbhf8h7fh646h56fh585hb9h588h575h4h6bh659h54dh665h67dh7bh5c7h66cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znwgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-48955_openstack(8731b0fb-0429-4730-8da9-cc182fdf29e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 01 08:55:30 crc kubenswrapper[4689]: E1201 08:55:30.807563 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-48955" podUID="8731b0fb-0429-4730-8da9-cc182fdf29e1"
Dec 01 08:55:31 crc kubenswrapper[4689]: E1201 08:55:31.085244 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Dec 01 08:55:31 crc kubenswrapper[4689]: E1201 08:55:31.085878 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhg6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-v6tfg_openstack(cac79dc0-0c0d-4fc8-b148-5b0ae546eba6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 01 08:55:31 crc kubenswrapper[4689]: E1201 08:55:31.091224 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" podUID="cac79dc0-0c0d-4fc8-b148-5b0ae546eba6"
Dec 01 08:55:31 crc kubenswrapper[4689]: E1201 08:55:31.127900 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Dec 01 08:55:31 crc kubenswrapper[4689]: E1201 08:55:31.128176 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snrrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-6hl9x_openstack(b01a2e6b-975d-4968-abf2-62137709ab4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 01 08:55:31 crc kubenswrapper[4689]: E1201 08:55:31.129560 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" podUID="b01a2e6b-975d-4968-abf2-62137709ab4e"
Dec 01 08:55:31 crc kubenswrapper[4689]: E1201 08:55:31.149766 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" podUID="8957703f-79dd-4f15-bad3-d7be659b8de6"
Dec 01 08:55:31 crc kubenswrapper[4689]: E1201 08:55:31.149800 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" podUID="cac79dc0-0c0d-4fc8-b148-5b0ae546eba6"
Dec 01 08:55:31 crc kubenswrapper[4689]: E1201 08:55:31.153499 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-48955" podUID="8731b0fb-0429-4730-8da9-cc182fdf29e1"
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.793797 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x"
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.798456 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm"
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.845786 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t4rfs"]
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.951721 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.977360 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a2e6b-975d-4968-abf2-62137709ab4e-config\") pod \"b01a2e6b-975d-4968-abf2-62137709ab4e\" (UID: \"b01a2e6b-975d-4968-abf2-62137709ab4e\") "
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.977517 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snrrk\" (UniqueName: \"kubernetes.io/projected/b01a2e6b-975d-4968-abf2-62137709ab4e-kube-api-access-snrrk\") pod \"b01a2e6b-975d-4968-abf2-62137709ab4e\" (UID: \"b01a2e6b-975d-4968-abf2-62137709ab4e\") "
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.977565 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-config\") pod \"f74987d3-25fa-4744-860a-e4c272305c81\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") "
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.977619 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-dns-svc\") pod \"f74987d3-25fa-4744-860a-e4c272305c81\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") "
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.977673 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws4c2\" (UniqueName: \"kubernetes.io/projected/f74987d3-25fa-4744-860a-e4c272305c81-kube-api-access-ws4c2\") pod \"f74987d3-25fa-4744-860a-e4c272305c81\" (UID: \"f74987d3-25fa-4744-860a-e4c272305c81\") "
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.978105 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01a2e6b-975d-4968-abf2-62137709ab4e-config" (OuterVolumeSpecName: "config") pod "b01a2e6b-975d-4968-abf2-62137709ab4e" (UID: "b01a2e6b-975d-4968-abf2-62137709ab4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.978162 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-config" (OuterVolumeSpecName: "config") pod "f74987d3-25fa-4744-860a-e4c272305c81" (UID: "f74987d3-25fa-4744-860a-e4c272305c81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.978193 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f74987d3-25fa-4744-860a-e4c272305c81" (UID: "f74987d3-25fa-4744-860a-e4c272305c81"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.980908 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01a2e6b-975d-4968-abf2-62137709ab4e-kube-api-access-snrrk" (OuterVolumeSpecName: "kube-api-access-snrrk") pod "b01a2e6b-975d-4968-abf2-62137709ab4e" (UID: "b01a2e6b-975d-4968-abf2-62137709ab4e"). InnerVolumeSpecName "kube-api-access-snrrk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:55:31 crc kubenswrapper[4689]: I1201 08:55:31.981097 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74987d3-25fa-4744-860a-e4c272305c81-kube-api-access-ws4c2" (OuterVolumeSpecName: "kube-api-access-ws4c2") pod "f74987d3-25fa-4744-860a-e4c272305c81" (UID: "f74987d3-25fa-4744-860a-e4c272305c81"). InnerVolumeSpecName "kube-api-access-ws4c2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.079920 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b01a2e6b-975d-4968-abf2-62137709ab4e-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.080315 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snrrk\" (UniqueName: \"kubernetes.io/projected/b01a2e6b-975d-4968-abf2-62137709ab4e-kube-api-access-snrrk\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.080330 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.080341 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f74987d3-25fa-4744-860a-e4c272305c81-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.080349 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws4c2\" (UniqueName: \"kubernetes.io/projected/f74987d3-25fa-4744-860a-e4c272305c81-kube-api-access-ws4c2\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.154276 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x" event={"ID":"b01a2e6b-975d-4968-abf2-62137709ab4e","Type":"ContainerDied","Data":"d78ed2bd7c6580f6eea60c4c70b41786d78b2099a6eb9eca3b763d30cc60f10f"}
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.154298 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6hl9x"
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.158990 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm" event={"ID":"f74987d3-25fa-4744-860a-e4c272305c81","Type":"ContainerDied","Data":"6bf7e8c0859e6111d38040655317249741c3d35d0f65d7672ba230855710639b"}
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.159037 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-c8mdm"
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.226822 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6hl9x"]
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.247850 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6hl9x"]
Dec 01 08:55:32 crc kubenswrapper[4689]: E1201 08:55:32.261709 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Dec 01 08:55:32 crc kubenswrapper[4689]: E1201 08:55:32.261800 4689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Dec 01 08:55:32 crc kubenswrapper[4689]: E1201 08:55:32.261956 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fq4xn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(06aa5768-7753-4a2d-8e40-96cea62d055c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 01 08:55:32 crc kubenswrapper[4689]: E1201 08:55:32.263271 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="06aa5768-7753-4a2d-8e40-96cea62d055c"
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.263511 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c8mdm"]
Dec 01 08:55:32 crc kubenswrapper[4689]: I1201 08:55:32.269898 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-c8mdm"]
Dec 01 08:55:32 crc kubenswrapper[4689]: W1201 08:55:32.311456 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b1a856a_afb7_4839_a797_7625521520b2.slice/crio-66ae925f28ad491a83aa0d03f67f2f734f9189f508b10ed2555d3ab24e6155b6 WatchSource:0}: Error finding container 66ae925f28ad491a83aa0d03f67f2f734f9189f508b10ed2555d3ab24e6155b6: Status 404 returned error can't find the container with id 66ae925f28ad491a83aa0d03f67f2f734f9189f508b10ed2555d3ab24e6155b6
Dec 01 08:55:33 crc kubenswrapper[4689]: I1201 08:55:33.058007 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01a2e6b-975d-4968-abf2-62137709ab4e" path="/var/lib/kubelet/pods/b01a2e6b-975d-4968-abf2-62137709ab4e/volumes"
Dec 01 08:55:33 crc kubenswrapper[4689]: I1201 08:55:33.058615 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f74987d3-25fa-4744-860a-e4c272305c81" path="/var/lib/kubelet/pods/f74987d3-25fa-4744-860a-e4c272305c81/volumes"
Dec 01 08:55:33 crc kubenswrapper[4689]: I1201 08:55:33.202343 4689 generic.go:334] "Generic (PLEG): container finished" podID="a0d0f0ef-1203-4001-9872-7c32022a4839" containerID="0d799b8007bdc6fbdc0cbb246c50ab8988e7995192a8030839573885af4bfa2f" exitCode=0
Dec 01 08:55:33 crc kubenswrapper[4689]: I1201 08:55:33.202945 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sj4xx" event={"ID":"a0d0f0ef-1203-4001-9872-7c32022a4839","Type":"ContainerDied","Data":"0d799b8007bdc6fbdc0cbb246c50ab8988e7995192a8030839573885af4bfa2f"}
Dec 01 08:55:33 crc kubenswrapper[4689]: I1201 08:55:33.213287 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95kdv" event={"ID":"9368a0f8-4d7e-49e3-b575-7f901ed6464c","Type":"ContainerStarted","Data":"81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353"}
Dec 01 08:55:33 crc kubenswrapper[4689]: I1201 08:55:33.216183 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5b1a856a-afb7-4839-a797-7625521520b2","Type":"ContainerStarted","Data":"66ae925f28ad491a83aa0d03f67f2f734f9189f508b10ed2555d3ab24e6155b6"}
Dec 01 08:55:33 crc kubenswrapper[4689]: I1201 08:55:33.218267 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t4rfs" event={"ID":"5b0566d9-e730-4929-aa69-fba41a7c88c0","Type":"ContainerStarted","Data":"7f753b287ccaaaab95491c3268b3f825272d37bcd505a88fbb69ed17d11fed2e"}
Dec 01 08:55:33 crc kubenswrapper[4689]: I1201 08:55:33.220095 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"150dfc79-4971-4c3d-aada-13fc85bd101c","Type":"ContainerStarted","Data":"fe032ac7b9498a1a96af5a4235c43ab346c1f9d819b163adb6c3171834d5d7d3"}
Dec 01 08:55:33 crc kubenswrapper[4689]: E1201 08:55:33.247905 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="06aa5768-7753-4a2d-8e40-96cea62d055c"
Dec 01 08:55:33 crc kubenswrapper[4689]: I1201 08:55:33.335267 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-95kdv" podStartSLOduration=5.581349989 podStartE2EDuration="38.335213392s" podCreationTimestamp="2025-12-01 08:54:55 +0000 UTC" firstStartedPulling="2025-12-01 08:54:58.495393124 +0000 UTC m=+978.567681028" lastFinishedPulling="2025-12-01 08:55:31.249256527 +0000 UTC m=+1011.321544431" observedRunningTime="2025-12-01 08:55:33.285015737 +0000 UTC m=+1013.357303651" watchObservedRunningTime="2025-12-01 08:55:33.335213392 +0000 UTC m=+1013.407501306"
Dec 01 08:55:34 crc kubenswrapper[4689]: I1201 08:55:34.233491 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sj4xx" event={"ID":"a0d0f0ef-1203-4001-9872-7c32022a4839","Type":"ContainerStarted","Data":"becf420ce9e03aad2ec275a388f9235b537a206f9074c4c345949fd74a0a7cf8"}
Dec 01 08:55:35 crc kubenswrapper[4689]: I1201 08:55:35.241853 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5b1a856a-afb7-4839-a797-7625521520b2","Type":"ContainerStarted","Data":"86ec8092bece51d47d9d2ec7a0377cbc161aab72da8a8e0f0d41991f157f7dbb"}
Dec 01 08:55:35 crc kubenswrapper[4689]: I1201 08:55:35.245034 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sj4xx" event={"ID":"a0d0f0ef-1203-4001-9872-7c32022a4839","Type":"ContainerStarted","Data":"4909bb39f4668a102cc39543367749a519295486864c5da27b375673afcbe51d"}
Dec 01 08:55:35 crc kubenswrapper[4689]: I1201 08:55:35.245200 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sj4xx"
Dec 01 08:55:35 crc kubenswrapper[4689]: I1201 08:55:35.267599 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-sj4xx" podStartSLOduration=8.654995678 podStartE2EDuration="36.267576717s" podCreationTimestamp="2025-12-01 08:54:59 +0000 UTC" firstStartedPulling="2025-12-01 08:55:03.448724718 +0000 UTC m=+983.521012622" lastFinishedPulling="2025-12-01 08:55:31.061305727 +0000 UTC m=+1011.133593661" observedRunningTime="2025-12-01 08:55:35.265236484 +0000 UTC m=+1015.337524398" watchObservedRunningTime="2025-12-01 08:55:35.267576717 +0000 UTC m=+1015.339864621"
Dec 01 08:55:35 crc kubenswrapper[4689]: I1201 08:55:35.762920 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-95kdv"
Dec 01 08:55:35 crc kubenswrapper[4689]: I1201 08:55:35.763271 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-95kdv"
Dec 01 08:55:36 crc kubenswrapper[4689]: I1201 08:55:36.254186 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sj4xx"
Dec 01 08:55:36 crc kubenswrapper[4689]: I1201 08:55:36.825180 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-95kdv" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="registry-server" probeResult="failure" output=<
Dec 01 08:55:36 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s
Dec 01 08:55:36 crc kubenswrapper[4689]: >
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.269572 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t4rfs" event={"ID":"5b0566d9-e730-4929-aa69-fba41a7c88c0","Type":"ContainerStarted","Data":"b076e9d3dafccbaf588ee8eaa79971609483414a28bdb761aa9a67adfde04910"}
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.271802 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"150dfc79-4971-4c3d-aada-13fc85bd101c","Type":"ContainerStarted","Data":"4f18fd3b906db3665c94f0beb64a4ea4353a203a4c23ed55f4a31d327cc237d2"}
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.274095 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"5b1a856a-afb7-4839-a797-7625521520b2","Type":"ContainerStarted","Data":"31d3b05bc81895f888c944d62a7282b9a628b64128ffcbd27c58333a08e72e81"}
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.293002 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-t4rfs" podStartSLOduration=29.610620245 podStartE2EDuration="35.292977927s" podCreationTimestamp="2025-12-01 08:55:03 +0000 UTC" firstStartedPulling="2025-12-01 08:55:32.296432805 +0000 UTC m=+1012.368720709" lastFinishedPulling="2025-12-01 08:55:37.978790487 +0000 UTC m=+1018.051078391" observedRunningTime="2025-12-01 08:55:38.286427308 +0000 UTC m=+1018.358715202" watchObservedRunningTime="2025-12-01 08:55:38.292977927 +0000 UTC m=+1018.365265831"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.321396 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.203088644 podStartE2EDuration="36.321351954s" podCreationTimestamp="2025-12-01 08:55:02 +0000 UTC" firstStartedPulling="2025-12-01 08:55:13.742058251 +0000 UTC m=+993.814346155" lastFinishedPulling="2025-12-01 08:55:37.860321561 +0000 UTC m=+1017.932609465" observedRunningTime="2025-12-01 08:55:38.30621603 +0000 UTC m=+1018.378503944" watchObservedRunningTime="2025-12-01 08:55:38.321351954 +0000 UTC m=+1018.393639868"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.345584 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=28.785056515 podStartE2EDuration="34.345560868s" podCreationTimestamp="2025-12-01 08:55:04 +0000 UTC" firstStartedPulling="2025-12-01 08:55:32.325557703 +0000 UTC m=+1012.397845617" lastFinishedPulling="2025-12-01 08:55:37.886062056 +0000 UTC m=+1017.958349970" observedRunningTime="2025-12-01 08:55:38.337350193 +0000 UTC m=+1018.409638137" watchObservedRunningTime="2025-12-01 08:55:38.345560868 +0000 UTC m=+1018.417848772"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.644360 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v6tfg"]
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.675694 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.708169 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-56dd2"]
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.709900 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.714249 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-56dd2"]
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.715315 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.830310 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-config\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.830434 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.830472 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.830497 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bspkc\" (UniqueName: \"kubernetes.io/projected/c6332c4d-2320-47cb-b432-a67c79c46240-kube-api-access-bspkc\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.919350 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvz5v"]
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.931891 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.931954 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.931989 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bspkc\" (UniqueName: \"kubernetes.io/projected/c6332c4d-2320-47cb-b432-a67c79c46240-kube-api-access-bspkc\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.932091 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-config\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.933171 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-config\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.933736 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.934238 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.944697 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.977921 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bspkc\" (UniqueName: \"kubernetes.io/projected/c6332c4d-2320-47cb-b432-a67c79c46240-kube-api-access-bspkc\") pod \"dnsmasq-dns-7fd796d7df-56dd2\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.989728 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tlzl7"]
Dec 01 08:55:38 crc kubenswrapper[4689]: I1201 08:55:38.991156 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.021745 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.024884 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.043510 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tlzl7"]
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.088315 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.136187 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-config\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.136249 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr7m6\" (UniqueName: \"kubernetes.io/projected/6161a545-1c73-42c1-8176-c311aad98fed-kube-api-access-wr7m6\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.136292 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.136315 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.136348 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.216094 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.240860 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.241045 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-config\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.241094 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr7m6\" (UniqueName: \"kubernetes.io/projected/6161a545-1c73-42c1-8176-c311aad98fed-kube-api-access-wr7m6\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.241136 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.241164 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.242267 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.242818 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.243587 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-config\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.249775 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.263385 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr7m6\" (UniqueName: \"kubernetes.io/projected/6161a545-1c73-42c1-8176-c311aad98fed-kube-api-access-wr7m6\") pod \"dnsmasq-dns-86db49b7ff-tlzl7\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.284414 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg" event={"ID":"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6","Type":"ContainerDied","Data":"958955d70b5b5ebd403995bde8a821a2c8216da345755bcfbfcd113713b57257"}
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.284625 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v6tfg"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.284680 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.344454 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-dns-svc\") pod \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") "
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.344549 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-config\") pod \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") "
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.344580 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhg6f\" (UniqueName: \"kubernetes.io/projected/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-kube-api-access-fhg6f\") pod \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\" (UID: \"cac79dc0-0c0d-4fc8-b148-5b0ae546eba6\") "
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.345583 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cac79dc0-0c0d-4fc8-b148-5b0ae546eba6" (UID: "cac79dc0-0c0d-4fc8-b148-5b0ae546eba6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.345998 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.346696 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-config" (OuterVolumeSpecName: "config") pod "cac79dc0-0c0d-4fc8-b148-5b0ae546eba6" (UID: "cac79dc0-0c0d-4fc8-b148-5b0ae546eba6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.353771 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-kube-api-access-fhg6f" (OuterVolumeSpecName: "kube-api-access-fhg6f") pod "cac79dc0-0c0d-4fc8-b148-5b0ae546eba6" (UID: "cac79dc0-0c0d-4fc8-b148-5b0ae546eba6"). InnerVolumeSpecName "kube-api-access-fhg6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.426955 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xvz5v"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.446737 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.446768 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.446780 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhg6f\" (UniqueName: \"kubernetes.io/projected/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6-kube-api-access-fhg6f\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.498262 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.548417 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-dns-svc\") pod \"8957703f-79dd-4f15-bad3-d7be659b8de6\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") "
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.548534 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvrjn\" (UniqueName: \"kubernetes.io/projected/8957703f-79dd-4f15-bad3-d7be659b8de6-kube-api-access-hvrjn\") pod \"8957703f-79dd-4f15-bad3-d7be659b8de6\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") "
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.548560 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-config\") pod \"8957703f-79dd-4f15-bad3-d7be659b8de6\" (UID: \"8957703f-79dd-4f15-bad3-d7be659b8de6\") "
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.549007 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8957703f-79dd-4f15-bad3-d7be659b8de6" (UID: "8957703f-79dd-4f15-bad3-d7be659b8de6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.549341 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-config" (OuterVolumeSpecName: "config") pod "8957703f-79dd-4f15-bad3-d7be659b8de6" (UID: "8957703f-79dd-4f15-bad3-d7be659b8de6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.551951 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8957703f-79dd-4f15-bad3-d7be659b8de6-kube-api-access-hvrjn" (OuterVolumeSpecName: "kube-api-access-hvrjn") pod "8957703f-79dd-4f15-bad3-d7be659b8de6" (UID: "8957703f-79dd-4f15-bad3-d7be659b8de6"). InnerVolumeSpecName "kube-api-access-hvrjn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.650656 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.650993 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvrjn\" (UniqueName: \"kubernetes.io/projected/8957703f-79dd-4f15-bad3-d7be659b8de6-kube-api-access-hvrjn\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.651012 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8957703f-79dd-4f15-bad3-d7be659b8de6-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.656206 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v6tfg"]
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.664774 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v6tfg"]
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.679879 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.709100 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-56dd2"]
Dec 01 08:55:39 crc kubenswrapper[4689]: W1201 08:55:39.724307 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6332c4d_2320_47cb_b432_a67c79c46240.slice/crio-45f466a0bcc714002dd1227cb7c282e6962f73414805b662c9efd7b20a190a9f WatchSource:0}: Error finding container 45f466a0bcc714002dd1227cb7c282e6962f73414805b662c9efd7b20a190a9f: Status 404 returned error can't find the container with id 45f466a0bcc714002dd1227cb7c282e6962f73414805b662c9efd7b20a190a9f
Dec 01 08:55:39 crc kubenswrapper[4689]: I1201 08:55:39.737811 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.001192 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tlzl7"]
Dec 01 08:55:40 crc kubenswrapper[4689]: W1201 08:55:40.008524 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6161a545_1c73_42c1_8176_c311aad98fed.slice/crio-6ed64f9027398840c804f06f609402a5aa2a7c8e5431c01d49855455e258d7c8 WatchSource:0}: Error finding container 6ed64f9027398840c804f06f609402a5aa2a7c8e5431c01d49855455e258d7c8: Status 404 returned error can't find the container with id 6ed64f9027398840c804f06f609402a5aa2a7c8e5431c01d49855455e258d7c8
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.294915 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xvz5v"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.300384 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xvz5v" event={"ID":"8957703f-79dd-4f15-bad3-d7be659b8de6","Type":"ContainerDied","Data":"381a1c27f4ccc911e074d5bcdb746df5973ce1da3aac3310f0f2ea3373edcbd7"}
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.314446 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" event={"ID":"c6332c4d-2320-47cb-b432-a67c79c46240","Type":"ContainerStarted","Data":"45f466a0bcc714002dd1227cb7c282e6962f73414805b662c9efd7b20a190a9f"}
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.321637 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" event={"ID":"6161a545-1c73-42c1-8176-c311aad98fed","Type":"ContainerStarted","Data":"6ed64f9027398840c804f06f609402a5aa2a7c8e5431c01d49855455e258d7c8"}
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.384675 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvz5v"]
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.393470 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.395098 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xvz5v"]
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.608727 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.610078 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.614630 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wbcqk"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.614682 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.615275 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.615300 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.630143 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.672383 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.673009 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm8m\" (UniqueName: \"kubernetes.io/projected/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-kube-api-access-9lm8m\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.673072 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.673090 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.673127 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-scripts\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.673169 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-config\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.673242 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.776185 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-scripts\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.776456 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-config\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.776600 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.776697 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.776793 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lm8m\" (UniqueName: \"kubernetes.io/projected/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-kube-api-access-9lm8m\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0"
Dec
01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.777017 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.777090 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.779050 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.781344 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.781430 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.781735 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.781881 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.790225 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.794024 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-scripts\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.796160 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-config\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.797903 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lm8m\" (UniqueName: \"kubernetes.io/projected/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-kube-api-access-9lm8m\") pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.808165 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa6871a1-f6d5-44b1-a4b7-638763c9c92b-ovn-northd-tls-certs\") 
pod \"ovn-northd-0\" (UID: \"aa6871a1-f6d5-44b1-a4b7-638763c9c92b\") " pod="openstack/ovn-northd-0" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.930158 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wbcqk" Dec 01 08:55:40 crc kubenswrapper[4689]: I1201 08:55:40.937993 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 01 08:55:41 crc kubenswrapper[4689]: I1201 08:55:41.062500 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8957703f-79dd-4f15-bad3-d7be659b8de6" path="/var/lib/kubelet/pods/8957703f-79dd-4f15-bad3-d7be659b8de6/volumes" Dec 01 08:55:41 crc kubenswrapper[4689]: I1201 08:55:41.062963 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac79dc0-0c0d-4fc8-b148-5b0ae546eba6" path="/var/lib/kubelet/pods/cac79dc0-0c0d-4fc8-b148-5b0ae546eba6/volumes" Dec 01 08:55:41 crc kubenswrapper[4689]: I1201 08:55:41.326793 4689 generic.go:334] "Generic (PLEG): container finished" podID="6161a545-1c73-42c1-8176-c311aad98fed" containerID="6b3853ac6b776a5558ee9a5e0acd3df9f1721b31474e7b25bc88a1961fc1c25e" exitCode=0 Dec 01 08:55:41 crc kubenswrapper[4689]: I1201 08:55:41.327122 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" event={"ID":"6161a545-1c73-42c1-8176-c311aad98fed","Type":"ContainerDied","Data":"6b3853ac6b776a5558ee9a5e0acd3df9f1721b31474e7b25bc88a1961fc1c25e"} Dec 01 08:55:41 crc kubenswrapper[4689]: I1201 08:55:41.333596 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"555543d8-21bb-4dba-9c08-ab82e90ea894","Type":"ContainerStarted","Data":"8168af9bc95d08d2550cc77efc305540837b2c0e523d7cfd1d4a3cc06875f461"} Dec 01 08:55:41 crc kubenswrapper[4689]: I1201 08:55:41.338443 4689 generic.go:334] "Generic (PLEG): container finished" podID="c6332c4d-2320-47cb-b432-a67c79c46240" containerID="9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff" exitCode=0 Dec 01 08:55:41 crc kubenswrapper[4689]: I1201 08:55:41.339512 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" event={"ID":"c6332c4d-2320-47cb-b432-a67c79c46240","Type":"ContainerDied","Data":"9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff"} Dec 01 08:55:41 crc kubenswrapper[4689]: I1201 08:55:41.485437 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 01 08:55:41 crc kubenswrapper[4689]: W1201 08:55:41.492940 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa6871a1_f6d5_44b1_a4b7_638763c9c92b.slice/crio-b22b35ca763a72418554edca776857c02948cd360fbadde799c6fd33f224c232 WatchSource:0}: Error finding container b22b35ca763a72418554edca776857c02948cd360fbadde799c6fd33f224c232: Status 404 returned error can't find the container with id b22b35ca763a72418554edca776857c02948cd360fbadde799c6fd33f224c232 Dec 01 08:55:42 crc kubenswrapper[4689]: I1201 08:55:42.346988 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"aa6871a1-f6d5-44b1-a4b7-638763c9c92b","Type":"ContainerStarted","Data":"b22b35ca763a72418554edca776857c02948cd360fbadde799c6fd33f224c232"} Dec 01 08:55:42 crc kubenswrapper[4689]: I1201 08:55:42.349859 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"edc6a475-296b-4f29-a48b-6876138662fd","Type":"ContainerStarted","Data":"9696664d4002a6911085236ecd7df6fa9f9eb259ffb735c43de9d7035c79daf7"} Dec 01 08:55:42 crc kubenswrapper[4689]: I1201 08:55:42.351934 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" event={"ID":"c6332c4d-2320-47cb-b432-a67c79c46240","Type":"ContainerStarted","Data":"cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc"} Dec 01 08:55:42 crc kubenswrapper[4689]: I1201 08:55:42.352348 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" Dec 01 08:55:42 crc kubenswrapper[4689]: I1201 08:55:42.354728 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" event={"ID":"6161a545-1c73-42c1-8176-c311aad98fed","Type":"ContainerStarted","Data":"fc91af50ca133f0cace1425809c5db035ee0d26aabac842b9ed4b0ed3a035e9b"} Dec 01 08:55:42 crc kubenswrapper[4689]: I1201 08:55:42.354758 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" Dec 01 08:55:42 crc kubenswrapper[4689]: I1201 08:55:42.404068 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" podStartSLOduration=3.935872839 podStartE2EDuration="4.404048839s" podCreationTimestamp="2025-12-01 08:55:38 +0000 UTC" firstStartedPulling="2025-12-01 08:55:40.012401617 +0000 UTC m=+1020.084689521" lastFinishedPulling="2025-12-01 08:55:40.480577617 +0000 UTC m=+1020.552865521" observedRunningTime="2025-12-01 08:55:42.400290525 +0000 UTC m=+1022.472578439" watchObservedRunningTime="2025-12-01 08:55:42.404048839 +0000 UTC m=+1022.476336743" Dec 01 08:55:42 crc kubenswrapper[4689]: I1201 08:55:42.426947 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" podStartSLOduration=3.9664956780000002 podStartE2EDuration="4.426926776s" podCreationTimestamp="2025-12-01 08:55:38 +0000 UTC" firstStartedPulling="2025-12-01 08:55:39.729214536 +0000 UTC m=+1019.801502440" lastFinishedPulling="2025-12-01 08:55:40.189645634 +0000 UTC m=+1020.261933538" observedRunningTime="2025-12-01 08:55:42.423833781 +0000 UTC m=+1022.496121695" watchObservedRunningTime="2025-12-01 08:55:42.426926776 +0000 UTC m=+1022.499214680" Dec 01 08:55:46 crc kubenswrapper[4689]: I1201 08:55:46.822649 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-95kdv" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="registry-server" probeResult="failure" output=< Dec 01 08:55:46 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s Dec 01 08:55:46 crc kubenswrapper[4689]: > Dec 01 08:55:47 crc kubenswrapper[4689]: I1201 08:55:47.442849 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87","Type":"ContainerStarted","Data":"6c94d18bc981b2dbe3f34fea2eb76e6d8e0b233b1a1d374cb8c1ed08c12aed49"} Dec 01 08:55:47 crc kubenswrapper[4689]: I1201 08:55:47.463629 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bc1ecd4c-eede-492c-ac97-071c42545607","Type":"ContainerStarted","Data":"3921a6605f05594aca49f06702b23e2298d2d782c891cb3f7727494124cca42f"} Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.475282 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"06aa5768-7753-4a2d-8e40-96cea62d055c","Type":"ContainerStarted","Data":"ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d"} Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.476924 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.477685 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"aa6871a1-f6d5-44b1-a4b7-638763c9c92b","Type":"ContainerStarted","Data":"d4fa18e5563566b6a082f652116504dcc69d3d8344ea4b0f1dbd8e813cb87cd5"} Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.477710 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"aa6871a1-f6d5-44b1-a4b7-638763c9c92b","Type":"ContainerStarted","Data":"e0545a9355c6f2ba9d54c5498cc40fe57c2033ae3465134cdf6b264d80721a4d"} Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.477849 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.479409 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f04989a7-e9bc-4d0b-a7a1-efe12657bd2b","Type":"ContainerStarted","Data":"906d289eed3baeabcdc8c2aef9ef5d80e5e385ff0c8f2ece289b766898acba07"} Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.479671 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.481485 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-48955" event={"ID":"8731b0fb-0429-4730-8da9-cc182fdf29e1","Type":"ContainerStarted","Data":"bdc036641d16d61c36c0bb274035e0a57a37c72519d3b66a1fdfcf012d704c95"} Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.481676 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-48955" Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.505538 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.941936309 podStartE2EDuration="50.505515926s" podCreationTimestamp="2025-12-01 08:54:58 +0000 UTC" firstStartedPulling="2025-12-01 08:55:00.053524814 +0000 UTC m=+980.125812728" lastFinishedPulling="2025-12-01 08:55:47.617104451 +0000 UTC m=+1027.689392345" observedRunningTime="2025-12-01 08:55:48.499328337 +0000 UTC m=+1028.571616241" watchObservedRunningTime="2025-12-01 08:55:48.505515926 +0000 UTC m=+1028.577803840" Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.529316 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.008399462 podStartE2EDuration="8.529290998s" podCreationTimestamp="2025-12-01 08:55:40 +0000 UTC" firstStartedPulling="2025-12-01 08:55:41.495969914 +0000 UTC m=+1021.568257818" lastFinishedPulling="2025-12-01 08:55:47.01686145 +0000 UTC m=+1027.089149354" observedRunningTime="2025-12-01 08:55:48.515190561 +0000 UTC m=+1028.587478475" watchObservedRunningTime="2025-12-01 08:55:48.529290998 +0000 UTC m=+1028.601578912" Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.546059 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.420182296 podStartE2EDuration="53.546036616s" podCreationTimestamp="2025-12-01 08:54:55 
+0000 UTC" firstStartedPulling="2025-12-01 08:54:57.83478797 +0000 UTC m=+977.907075864" lastFinishedPulling="2025-12-01 08:55:46.96064228 +0000 UTC m=+1027.032930184" observedRunningTime="2025-12-01 08:55:48.544079483 +0000 UTC m=+1028.616367387" watchObservedRunningTime="2025-12-01 08:55:48.546036616 +0000 UTC m=+1028.618324520" Dec 01 08:55:48 crc kubenswrapper[4689]: I1201 08:55:48.581357 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-48955" podStartSLOduration=3.116628003 podStartE2EDuration="49.581332354s" podCreationTimestamp="2025-12-01 08:54:59 +0000 UTC" firstStartedPulling="2025-12-01 08:55:00.503945848 +0000 UTC m=+980.576233752" lastFinishedPulling="2025-12-01 08:55:46.968650199 +0000 UTC m=+1027.040938103" observedRunningTime="2025-12-01 08:55:48.577797967 +0000 UTC m=+1028.650085881" watchObservedRunningTime="2025-12-01 08:55:48.581332354 +0000 UTC m=+1028.653620258" Dec 01 08:55:49 crc kubenswrapper[4689]: I1201 08:55:49.026800 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" Dec 01 08:55:49 crc kubenswrapper[4689]: I1201 08:55:49.500264 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" Dec 01 08:55:49 crc kubenswrapper[4689]: I1201 08:55:49.558628 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-56dd2"] Dec 01 08:55:49 crc kubenswrapper[4689]: I1201 08:55:49.558925 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" podUID="c6332c4d-2320-47cb-b432-a67c79c46240" containerName="dnsmasq-dns" containerID="cri-o://cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc" gracePeriod=10 Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.057123 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.146968 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bspkc\" (UniqueName: \"kubernetes.io/projected/c6332c4d-2320-47cb-b432-a67c79c46240-kube-api-access-bspkc\") pod \"c6332c4d-2320-47cb-b432-a67c79c46240\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.147036 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-dns-svc\") pod \"c6332c4d-2320-47cb-b432-a67c79c46240\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.147202 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-ovsdbserver-nb\") pod \"c6332c4d-2320-47cb-b432-a67c79c46240\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.147253 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-config\") pod \"c6332c4d-2320-47cb-b432-a67c79c46240\" (UID: \"c6332c4d-2320-47cb-b432-a67c79c46240\") " Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.152684 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6332c4d-2320-47cb-b432-a67c79c46240-kube-api-access-bspkc" (OuterVolumeSpecName: "kube-api-access-bspkc") pod "c6332c4d-2320-47cb-b432-a67c79c46240" (UID: "c6332c4d-2320-47cb-b432-a67c79c46240"). InnerVolumeSpecName "kube-api-access-bspkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.197388 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6332c4d-2320-47cb-b432-a67c79c46240" (UID: "c6332c4d-2320-47cb-b432-a67c79c46240"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.201010 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c6332c4d-2320-47cb-b432-a67c79c46240" (UID: "c6332c4d-2320-47cb-b432-a67c79c46240"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.211592 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-config" (OuterVolumeSpecName: "config") pod "c6332c4d-2320-47cb-b432-a67c79c46240" (UID: "c6332c4d-2320-47cb-b432-a67c79c46240"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.249590 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.249629 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.249642 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6332c4d-2320-47cb-b432-a67c79c46240-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.249651 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bspkc\" (UniqueName: \"kubernetes.io/projected/c6332c4d-2320-47cb-b432-a67c79c46240-kube-api-access-bspkc\") on node \"crc\" DevicePath \"\"" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.496877 4689 generic.go:334] "Generic (PLEG): container finished" podID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerID="8168af9bc95d08d2550cc77efc305540837b2c0e523d7cfd1d4a3cc06875f461" exitCode=0 Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.497230 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"555543d8-21bb-4dba-9c08-ab82e90ea894","Type":"ContainerDied","Data":"8168af9bc95d08d2550cc77efc305540837b2c0e523d7cfd1d4a3cc06875f461"} Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.502154 4689 generic.go:334] "Generic (PLEG): container finished" podID="c6332c4d-2320-47cb-b432-a67c79c46240" containerID="cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc" exitCode=0 Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.502208 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" event={"ID":"c6332c4d-2320-47cb-b432-a67c79c46240","Type":"ContainerDied","Data":"cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc"} Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.502238 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" event={"ID":"c6332c4d-2320-47cb-b432-a67c79c46240","Type":"ContainerDied","Data":"45f466a0bcc714002dd1227cb7c282e6962f73414805b662c9efd7b20a190a9f"} Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.502282 4689 scope.go:117] "RemoveContainer" containerID="cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.502425 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-56dd2" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.535745 4689 scope.go:117] "RemoveContainer" containerID="9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.550244 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-56dd2"] Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.565687 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-56dd2"] Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.611182 4689 scope.go:117] "RemoveContainer" containerID="cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc" Dec 01 08:55:50 crc kubenswrapper[4689]: E1201 08:55:50.611802 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc\": container with ID starting with cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc not found: ID does not exist" containerID="cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.611883 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc"} err="failed to get container status \"cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc\": rpc error: code = NotFound desc = could not find container \"cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc\": container with ID starting with cb644e76eeaf02e99a4e0ab4db9f5bdb2900ec44653a5c70c901cabf6fa708dc not found: ID does not exist" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.611925 4689 scope.go:117] "RemoveContainer" containerID="9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff" Dec 01 08:55:50 crc kubenswrapper[4689]: E1201 08:55:50.612291 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff\": container with ID starting with 9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff not found: ID does not exist" containerID="9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff" Dec 01 08:55:50 crc kubenswrapper[4689]: I1201 08:55:50.612332 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff"} err="failed to get container status \"9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff\": rpc error: code = NotFound desc = could not find container \"9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff\": container with ID starting with 9518afaf32afedd4fc4f6a808f519930f2de94d8c03244ffd1c8b2c8750c26ff not found: ID does not exist" Dec 01 08:55:51 crc kubenswrapper[4689]: I1201 08:55:51.057777 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6332c4d-2320-47cb-b432-a67c79c46240" path="/var/lib/kubelet/pods/c6332c4d-2320-47cb-b432-a67c79c46240/volumes" Dec 01 08:55:51 crc kubenswrapper[4689]: I1201 08:55:51.513002 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"555543d8-21bb-4dba-9c08-ab82e90ea894","Type":"ContainerStarted","Data":"dd35cdcc63b59bd0ec3b1fcfc4e426e3585823f6c176ab62c8ed5bfd6bedec01"} Dec 01 08:55:51 crc kubenswrapper[4689]: I1201 08:55:51.535193 4689 generic.go:334] "Generic (PLEG): container finished" podID="bc1ecd4c-eede-492c-ac97-071c42545607" containerID="3921a6605f05594aca49f06702b23e2298d2d782c891cb3f7727494124cca42f" exitCode=0 Dec 01 08:55:51 crc kubenswrapper[4689]: I1201 08:55:51.535259 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bc1ecd4c-eede-492c-ac97-071c42545607","Type":"ContainerDied","Data":"3921a6605f05594aca49f06702b23e2298d2d782c891cb3f7727494124cca42f"} Dec 01 08:55:51 crc kubenswrapper[4689]: I1201 08:55:51.572979 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=13.746395788 podStartE2EDuration="58.572954838s" podCreationTimestamp="2025-12-01 08:54:53 +0000 UTC" firstStartedPulling="2025-12-01 08:54:55.832769906 +0000 UTC m=+975.905057810" lastFinishedPulling="2025-12-01 08:55:40.659328956 +0000 UTC m=+1020.731616860" observedRunningTime="2025-12-01 08:55:51.565969586 +0000 UTC m=+1031.638257500" watchObservedRunningTime="2025-12-01 08:55:51.572954838 +0000 UTC m=+1031.645242742" Dec 01 08:55:52 crc kubenswrapper[4689]: I1201 08:55:52.559991 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bc1ecd4c-eede-492c-ac97-071c42545607","Type":"ContainerStarted","Data":"e2b0fa9a93491ea839311a49524203c3d83c4efd8aad981f0425aad8ebf82692"} Dec 01 08:55:52 crc kubenswrapper[4689]: I1201 08:55:52.597071 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371978.257727 podStartE2EDuration="58.597048952s" podCreationTimestamp="2025-12-01 08:54:54 +0000 UTC" firstStartedPulling="2025-12-01 08:54:57.061584101 +0000 UTC m=+977.133872005" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:55:52.586190556 +0000 UTC m=+1032.658478460" watchObservedRunningTime="2025-12-01 08:55:52.597048952 +0000 UTC m=+1032.669336856" Dec 01 08:55:54 crc kubenswrapper[4689]: I1201 08:55:54.461674 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 01 08:55:54 crc kubenswrapper[4689]: I1201 08:55:54.461736 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 01 08:55:55 crc kubenswrapper[4689]: I1201 08:55:55.826054 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:55:55 crc kubenswrapper[4689]: I1201 08:55:55.870466 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:55:56 crc kubenswrapper[4689]: I1201 08:55:56.017979 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 01 08:55:56 crc kubenswrapper[4689]: I1201 08:55:56.018069 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 01 08:55:56 crc kubenswrapper[4689]: I1201 08:55:56.369536 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 01 08:55:56 crc kubenswrapper[4689]: I1201 08:55:56.540215 4689 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-95kdv"] Dec 01 08:55:57 crc kubenswrapper[4689]: I1201 08:55:57.603918 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-95kdv" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="registry-server" containerID="cri-o://81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353" gracePeriod=2 Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.076747 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.172326 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-catalog-content\") pod \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.172418 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6l22\" (UniqueName: \"kubernetes.io/projected/9368a0f8-4d7e-49e3-b575-7f901ed6464c-kube-api-access-h6l22\") pod \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.172446 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-utilities\") pod \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\" (UID: \"9368a0f8-4d7e-49e3-b575-7f901ed6464c\") " Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.175977 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-utilities" (OuterVolumeSpecName: "utilities") pod "9368a0f8-4d7e-49e3-b575-7f901ed6464c" (UID: "9368a0f8-4d7e-49e3-b575-7f901ed6464c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.179534 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9368a0f8-4d7e-49e3-b575-7f901ed6464c-kube-api-access-h6l22" (OuterVolumeSpecName: "kube-api-access-h6l22") pod "9368a0f8-4d7e-49e3-b575-7f901ed6464c" (UID: "9368a0f8-4d7e-49e3-b575-7f901ed6464c"). InnerVolumeSpecName "kube-api-access-h6l22". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.274979 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6l22\" (UniqueName: \"kubernetes.io/projected/9368a0f8-4d7e-49e3-b575-7f901ed6464c-kube-api-access-h6l22\") on node \"crc\" DevicePath \"\"" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.275031 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.300419 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9368a0f8-4d7e-49e3-b575-7f901ed6464c" (UID: "9368a0f8-4d7e-49e3-b575-7f901ed6464c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.376906 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9368a0f8-4d7e-49e3-b575-7f901ed6464c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.612562 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.614604 4689 generic.go:334] "Generic (PLEG): container finished" podID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerID="81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353" exitCode=0 Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.614635 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95kdv" event={"ID":"9368a0f8-4d7e-49e3-b575-7f901ed6464c","Type":"ContainerDied","Data":"81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353"} Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.614656 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95kdv" event={"ID":"9368a0f8-4d7e-49e3-b575-7f901ed6464c","Type":"ContainerDied","Data":"ee7bb79a4ce94b2df00b36c91c18a4a0006ccfec49caa9fe1c53446b3699b4e7"} Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.614675 4689 scope.go:117] "RemoveContainer" containerID="81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.614800 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-95kdv" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.672060 4689 scope.go:117] "RemoveContainer" containerID="34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.673058 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.696798 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-95kdv"] Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.705796 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-95kdv"] Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.788948 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-lmql7"] Dec 01 08:55:58 crc kubenswrapper[4689]: E1201 08:55:58.804998 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6332c4d-2320-47cb-b432-a67c79c46240" containerName="dnsmasq-dns" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.805210 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6332c4d-2320-47cb-b432-a67c79c46240" containerName="dnsmasq-dns" Dec 01 08:55:58 crc kubenswrapper[4689]: E1201 08:55:58.805231 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="registry-server" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.805238 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="registry-server" Dec 01 08:55:58 crc kubenswrapper[4689]: E1201 08:55:58.805259 4689 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c6332c4d-2320-47cb-b432-a67c79c46240" containerName="init" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.805264 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6332c4d-2320-47cb-b432-a67c79c46240" containerName="init" Dec 01 08:55:58 crc kubenswrapper[4689]: E1201 08:55:58.805275 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="extract-content" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.805282 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="extract-content" Dec 01 08:55:58 crc kubenswrapper[4689]: E1201 08:55:58.805298 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="extract-utilities" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.805305 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="extract-utilities" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.805508 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" containerName="registry-server" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.805533 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6332c4d-2320-47cb-b432-a67c79c46240" containerName="dnsmasq-dns" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.806492 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.805157 4689 scope.go:117] "RemoveContainer" containerID="fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.817927 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-lmql7"] Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.870506 4689 scope.go:117] "RemoveContainer" containerID="81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353" Dec 01 08:55:58 crc kubenswrapper[4689]: E1201 08:55:58.871643 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353\": container with ID starting with 81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353 not found: ID does not exist" containerID="81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.871671 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353"} err="failed to get container status \"81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353\": rpc error: code = NotFound desc = could not find container \"81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353\": container with ID starting with 81699f95419fb7a42418e5e702ed22816ef881dc9f687853a2e534e789588353 not found: ID does not exist" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.871707 4689 scope.go:117] "RemoveContainer" containerID="34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b" Dec 01 08:55:58 crc kubenswrapper[4689]: E1201 08:55:58.878555 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b\": container with ID starting with 34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b not found: ID does not exist" containerID="34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.878610 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b"} err="failed to get container status \"34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b\": rpc error: code = NotFound desc = could not find container \"34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b\": container with ID starting with 34aeba54b6874d5b66edab80b8f1780978ab5ff59e2b769b03f9337124ad102b not found: ID does not exist" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.878645 4689 scope.go:117] "RemoveContainer" containerID="fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab" Dec 01 08:55:58 crc kubenswrapper[4689]: E1201 08:55:58.880718 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab\": container with ID starting with fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab not found: ID does not exist" containerID="fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.880759 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab"} err="failed to get container status \"fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab\": rpc error: code = NotFound desc = could not find container \"fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab\": container with ID starting with fa6b6254448234f062352e365d6e3a30058292040cd9bec337cf12ea3a1264ab not found: ID does not exist" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.973335 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.987830 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.987913 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.987968 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zcn2\" (UniqueName: \"kubernetes.io/projected/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-kube-api-access-5zcn2\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 
08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.988004 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-config\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:58 crc kubenswrapper[4689]: I1201 08:55:58.988054 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-dns-svc\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.062748 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9368a0f8-4d7e-49e3-b575-7f901ed6464c" path="/var/lib/kubelet/pods/9368a0f8-4d7e-49e3-b575-7f901ed6464c/volumes" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.089277 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.089361 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.089430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zcn2\" (UniqueName: \"kubernetes.io/projected/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-kube-api-access-5zcn2\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.089463 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-config\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.089517 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-dns-svc\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.090446 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.090539 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-config\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.090547 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.090978 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-dns-svc\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.107147 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.113644 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.135160 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zcn2\" (UniqueName: \"kubernetes.io/projected/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-kube-api-access-5zcn2\") pod \"dnsmasq-dns-698758b865-lmql7\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.171845 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.710668 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-lmql7"] Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.911576 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.916821 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.920063 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.920098 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.920099 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.921450 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-4qsds" Dec 01 08:55:59 crc kubenswrapper[4689]: I1201 08:55:59.965614 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.011421 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c18c4a63-48ba-42e2-a7f0-d5750963b90f-lock\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.011481 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.011506 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c18c4a63-48ba-42e2-a7f0-d5750963b90f-cache\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.011524 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.011565 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkg97\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-kube-api-access-mkg97\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.112758 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c18c4a63-48ba-42e2-a7f0-d5750963b90f-lock\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.112813 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.112844 4689 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c18c4a63-48ba-42e2-a7f0-d5750963b90f-cache\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.112872 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.112943 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkg97\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-kube-api-access-mkg97\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: E1201 08:56:00.113650 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.113669 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c18c4a63-48ba-42e2-a7f0-d5750963b90f-lock\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: E1201 08:56:00.113684 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 08:56:00 crc kubenswrapper[4689]: E1201 08:56:00.113750 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift podName:c18c4a63-48ba-42e2-a7f0-d5750963b90f nodeName:}" failed. No retries permitted until 2025-12-01 08:56:00.613721644 +0000 UTC m=+1040.686009548 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift") pod "swift-storage-0" (UID: "c18c4a63-48ba-42e2-a7f0-d5750963b90f") : configmap "swift-ring-files" not found Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.113980 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c18c4a63-48ba-42e2-a7f0-d5750963b90f-cache\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.114049 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.134250 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkg97\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-kube-api-access-mkg97\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.139041 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.418899 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wkk2p"] Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.459967 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wkk2p"] Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.474513 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.480691 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.481664 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.493686 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-66t7q"] Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.497681 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.498218 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.529298 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgknh\" (UniqueName: \"kubernetes.io/projected/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-kube-api-access-lgknh\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.529547 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-ring-data-devices\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.529657 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-dispersionconf\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.529773 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-combined-ca-bundle\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.529914 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-scripts\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.530036 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-etc-swift\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.530140 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-swiftconf\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.532447 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-wkk2p"] Dec 01 08:56:00 crc kubenswrapper[4689]: E1201 08:56:00.533162 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-lgknh ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openstack/swift-ring-rebalance-wkk2p" podUID="3f0bda4f-c3ae-4068-b165-fa2b99d17a01" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.547444 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-66t7q"] Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.633084 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-scripts\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.633405 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-etc-swift\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.633492 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-ring-data-devices\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.633569 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-swiftconf\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.633668 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-dispersionconf\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.633761 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvqb\" (UniqueName: \"kubernetes.io/projected/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-kube-api-access-xzvqb\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.633880 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.633955 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-scripts\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.633970 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgknh\" (UniqueName: 
\"kubernetes.io/projected/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-kube-api-access-lgknh\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.634068 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-ring-data-devices\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.634091 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-dispersionconf\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.634116 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-swiftconf\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.634161 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-combined-ca-bundle\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.634202 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-etc-swift\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.634240 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-scripts\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.634272 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-combined-ca-bundle\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.635057 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-etc-swift\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: E1201 08:56:00.635738 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 08:56:00 crc kubenswrapper[4689]: E1201 
08:56:00.635872 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 08:56:00 crc kubenswrapper[4689]: E1201 08:56:00.635914 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift podName:c18c4a63-48ba-42e2-a7f0-d5750963b90f nodeName:}" failed. No retries permitted until 2025-12-01 08:56:01.635898913 +0000 UTC m=+1041.708186817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift") pod "swift-storage-0" (UID: "c18c4a63-48ba-42e2-a7f0-d5750963b90f") : configmap "swift-ring-files" not found Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.636098 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-ring-data-devices\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.636385 4689 generic.go:334] "Generic (PLEG): container finished" podID="5ad3fe76-0301-4ad7-be7a-2dea749a1c63" containerID="22716f9ae4156a8dc3a9e5a6c9aa514050df1b9033502a1cc185c074d0f09301" exitCode=0 Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.636532 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.636506 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-lmql7" event={"ID":"5ad3fe76-0301-4ad7-be7a-2dea749a1c63","Type":"ContainerDied","Data":"22716f9ae4156a8dc3a9e5a6c9aa514050df1b9033502a1cc185c074d0f09301"} Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.637129 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-lmql7" event={"ID":"5ad3fe76-0301-4ad7-be7a-2dea749a1c63","Type":"ContainerStarted","Data":"f01dfbb24209ac4303a2dad27b14802e9d6a9b59e19c152f730e3c302ccc68d9"} Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.639925 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-combined-ca-bundle\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.642986 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-dispersionconf\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.652242 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-swiftconf\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.671124 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgknh\" (UniqueName: 
\"kubernetes.io/projected/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-kube-api-access-lgknh\") pod \"swift-ring-rebalance-wkk2p\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.730607 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.736786 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-ring-data-devices\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.736887 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-dispersionconf\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.736919 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzvqb\" (UniqueName: \"kubernetes.io/projected/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-kube-api-access-xzvqb\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.737054 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-swiftconf\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.737089 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-combined-ca-bundle\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.737132 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-etc-swift\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.737156 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-scripts\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.738294 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-ring-data-devices\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.740510 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-etc-swift\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.740899 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-scripts\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.749912 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-dispersionconf\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.757831 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-swiftconf\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.767865 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-combined-ca-bundle\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.768591 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzvqb\" (UniqueName: \"kubernetes.io/projected/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-kube-api-access-xzvqb\") pod \"swift-ring-rebalance-66t7q\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.831131 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.837986 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgknh\" (UniqueName: \"kubernetes.io/projected/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-kube-api-access-lgknh\") pod \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.838120 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-etc-swift\") pod \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.838262 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-dispersionconf\") pod \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.838299 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-combined-ca-bundle\") pod \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.838335 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-ring-data-devices\") pod \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.838357 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-scripts\") pod \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.838419 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-swiftconf\") pod \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\" (UID: \"3f0bda4f-c3ae-4068-b165-fa2b99d17a01\") " Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.840994 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3f0bda4f-c3ae-4068-b165-fa2b99d17a01" (UID: "3f0bda4f-c3ae-4068-b165-fa2b99d17a01"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.841877 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-scripts" (OuterVolumeSpecName: "scripts") pod "3f0bda4f-c3ae-4068-b165-fa2b99d17a01" (UID: "3f0bda4f-c3ae-4068-b165-fa2b99d17a01"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.842252 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f0bda4f-c3ae-4068-b165-fa2b99d17a01" (UID: "3f0bda4f-c3ae-4068-b165-fa2b99d17a01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.842405 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3f0bda4f-c3ae-4068-b165-fa2b99d17a01" (UID: "3f0bda4f-c3ae-4068-b165-fa2b99d17a01"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.844616 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-kube-api-access-lgknh" (OuterVolumeSpecName: "kube-api-access-lgknh") pod "3f0bda4f-c3ae-4068-b165-fa2b99d17a01" (UID: "3f0bda4f-c3ae-4068-b165-fa2b99d17a01"). InnerVolumeSpecName "kube-api-access-lgknh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.846271 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3f0bda4f-c3ae-4068-b165-fa2b99d17a01" (UID: "3f0bda4f-c3ae-4068-b165-fa2b99d17a01"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.846387 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3f0bda4f-c3ae-4068-b165-fa2b99d17a01" (UID: "3f0bda4f-c3ae-4068-b165-fa2b99d17a01"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.940307 4689 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.940580 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.940589 4689 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.940600 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.940610 4689 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.940619 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgknh\" (UniqueName: \"kubernetes.io/projected/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-kube-api-access-lgknh\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:00 crc kubenswrapper[4689]: I1201 08:56:00.940644 4689 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f0bda4f-c3ae-4068-b165-fa2b99d17a01-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.029694 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.319022 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-66t7q"] Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.645901 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-lmql7" event={"ID":"5ad3fe76-0301-4ad7-be7a-2dea749a1c63","Type":"ContainerStarted","Data":"d408b61adfcd055eab5b3a8f47f70ae36569c3b40c0eb926958f5a122e594e45"} Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.646279 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.647136 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wkk2p" Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.647149 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-66t7q" event={"ID":"0e57c646-4b20-4bb9-9c89-bad52b7a1c07","Type":"ContainerStarted","Data":"10c3cadf99a13fd24ef73c67b0e8c9b020165fda6c1a37f5fd46e3f256c97d14"} Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.650724 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:01 crc kubenswrapper[4689]: E1201 08:56:01.650915 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 08:56:01 crc kubenswrapper[4689]: E1201 08:56:01.650932 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 08:56:01 crc kubenswrapper[4689]: E1201 08:56:01.650990 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift podName:c18c4a63-48ba-42e2-a7f0-d5750963b90f nodeName:}" failed. No retries permitted until 2025-12-01 08:56:03.6509733 +0000 UTC m=+1043.723261204 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift") pod "swift-storage-0" (UID: "c18c4a63-48ba-42e2-a7f0-d5750963b90f") : configmap "swift-ring-files" not found Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.672876 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-lmql7" podStartSLOduration=3.67285944 podStartE2EDuration="3.67285944s" podCreationTimestamp="2025-12-01 08:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:01.670363222 +0000 UTC m=+1041.742665626" watchObservedRunningTime="2025-12-01 08:56:01.67285944 +0000 UTC m=+1041.745147344" Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.705826 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-wkk2p"] Dec 01 08:56:01 crc kubenswrapper[4689]: I1201 08:56:01.720665 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-wkk2p"] Dec 01 08:56:03 crc kubenswrapper[4689]: I1201 08:56:03.057128 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0bda4f-c3ae-4068-b165-fa2b99d17a01" path="/var/lib/kubelet/pods/3f0bda4f-c3ae-4068-b165-fa2b99d17a01/volumes" Dec 01 08:56:03 crc kubenswrapper[4689]: I1201 08:56:03.687640 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:03 crc kubenswrapper[4689]: E1201 08:56:03.687865 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 08:56:03 crc kubenswrapper[4689]: E1201 08:56:03.688126 4689 projected.go:194] Error preparing data for projected volume etc-swift 
for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 08:56:03 crc kubenswrapper[4689]: E1201 08:56:03.688200 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift podName:c18c4a63-48ba-42e2-a7f0-d5750963b90f nodeName:}" failed. No retries permitted until 2025-12-01 08:56:07.68817988 +0000 UTC m=+1047.760467784 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift") pod "swift-storage-0" (UID: "c18c4a63-48ba-42e2-a7f0-d5750963b90f") : configmap "swift-ring-files" not found Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.060171 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.063452 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sj4xx" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.288513 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-48955-config-gg8jq"] Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.295825 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.298686 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.315338 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-48955-config-gg8jq"] Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.322671 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-log-ovn\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.322726 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.322746 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-additional-scripts\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.323151 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fclk\" (UniqueName: \"kubernetes.io/projected/61e872f8-178c-4fc2-9445-78bb85dcc09e-kube-api-access-5fclk\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.323278 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run-ovn\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.323404 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-scripts\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.428535 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run-ovn\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.428879 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-scripts\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.428938 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-log-ovn\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.428978 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.428993 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-additional-scripts\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.429067 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fclk\" (UniqueName: \"kubernetes.io/projected/61e872f8-178c-4fc2-9445-78bb85dcc09e-kube-api-access-5fclk\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.429096 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run-ovn\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.429378 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-log-ovn\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.429385 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.429956 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-additional-scripts\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.431222 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-scripts\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.464299 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fclk\" (UniqueName: \"kubernetes.io/projected/61e872f8-178c-4fc2-9445-78bb85dcc09e-kube-api-access-5fclk\") pod \"ovn-controller-48955-config-gg8jq\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.614438 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.686115 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-66t7q" event={"ID":"0e57c646-4b20-4bb9-9c89-bad52b7a1c07","Type":"ContainerStarted","Data":"e1f5456c5ad2dd0431725b9d0b0072efa58d45ce6971689d51ab59578545315e"} Dec 01 08:56:05 crc kubenswrapper[4689]: I1201 08:56:05.712189 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-66t7q" podStartSLOduration=1.707633273 podStartE2EDuration="5.712151855s" podCreationTimestamp="2025-12-01 08:56:00 +0000 UTC" firstStartedPulling="2025-12-01 08:56:01.328192685 +0000 UTC m=+1041.400480579" lastFinishedPulling="2025-12-01 08:56:05.332711247 +0000 UTC m=+1045.404999161" observedRunningTime="2025-12-01 08:56:05.711698793 +0000 UTC m=+1045.783986697" watchObservedRunningTime="2025-12-01 08:56:05.712151855 +0000 UTC m=+1045.784439759" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.006628 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rg78r"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.008117 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.036588 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4tv5\" (UniqueName: \"kubernetes.io/projected/a07bdee7-df02-4a87-ab8e-68939cd995bd-kube-api-access-t4tv5\") pod \"keystone-db-create-rg78r\" (UID: \"a07bdee7-df02-4a87-ab8e-68939cd995bd\") " pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.036663 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07bdee7-df02-4a87-ab8e-68939cd995bd-operator-scripts\") pod \"keystone-db-create-rg78r\" (UID: \"a07bdee7-df02-4a87-ab8e-68939cd995bd\") " pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.041000 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rg78r"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.138218 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4tv5\" (UniqueName: \"kubernetes.io/projected/a07bdee7-df02-4a87-ab8e-68939cd995bd-kube-api-access-t4tv5\") pod \"keystone-db-create-rg78r\" (UID: \"a07bdee7-df02-4a87-ab8e-68939cd995bd\") " pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.138768 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07bdee7-df02-4a87-ab8e-68939cd995bd-operator-scripts\") pod \"keystone-db-create-rg78r\" (UID: \"a07bdee7-df02-4a87-ab8e-68939cd995bd\") " pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.140061 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07bdee7-df02-4a87-ab8e-68939cd995bd-operator-scripts\") pod \"keystone-db-create-rg78r\" (UID: \"a07bdee7-df02-4a87-ab8e-68939cd995bd\") " pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.147323 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c798-account-create-update-6jt58"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.148976 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.152783 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.163205 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c798-account-create-update-6jt58"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.169916 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4tv5\" (UniqueName: \"kubernetes.io/projected/a07bdee7-df02-4a87-ab8e-68939cd995bd-kube-api-access-t4tv5\") pod \"keystone-db-create-rg78r\" (UID: \"a07bdee7-df02-4a87-ab8e-68939cd995bd\") " pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.180118 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-48955-config-gg8jq"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.240635 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmqsm\" (UniqueName: \"kubernetes.io/projected/8502e89e-509a-4cf5-8308-d769dba3a547-kube-api-access-hmqsm\") pod \"keystone-c798-account-create-update-6jt58\" (UID: \"8502e89e-509a-4cf5-8308-d769dba3a547\") " pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.240728 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8502e89e-509a-4cf5-8308-d769dba3a547-operator-scripts\") pod \"keystone-c798-account-create-update-6jt58\" (UID: \"8502e89e-509a-4cf5-8308-d769dba3a547\") " pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.251061 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pxsb5"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.253884 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.276833 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pxsb5"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.341791 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8502e89e-509a-4cf5-8308-d769dba3a547-operator-scripts\") pod \"keystone-c798-account-create-update-6jt58\" (UID: \"8502e89e-509a-4cf5-8308-d769dba3a547\") " pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.341845 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcefa00f-aa74-4399-8e77-df956e479367-operator-scripts\") pod \"placement-db-create-pxsb5\" (UID: \"fcefa00f-aa74-4399-8e77-df956e479367\") " pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.341896 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxkp4\" (UniqueName: \"kubernetes.io/projected/fcefa00f-aa74-4399-8e77-df956e479367-kube-api-access-qxkp4\") pod \"placement-db-create-pxsb5\" (UID: \"fcefa00f-aa74-4399-8e77-df956e479367\") " pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.341985 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmqsm\" (UniqueName: \"kubernetes.io/projected/8502e89e-509a-4cf5-8308-d769dba3a547-kube-api-access-hmqsm\") pod \"keystone-c798-account-create-update-6jt58\" (UID: \"8502e89e-509a-4cf5-8308-d769dba3a547\") " pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.343124 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8502e89e-509a-4cf5-8308-d769dba3a547-operator-scripts\") pod \"keystone-c798-account-create-update-6jt58\" (UID: \"8502e89e-509a-4cf5-8308-d769dba3a547\") " pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.349598 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1ae5-account-create-update-55l2b"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.351104 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.353540 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.356790 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.364106 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1ae5-account-create-update-55l2b"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.366570 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmqsm\" (UniqueName: \"kubernetes.io/projected/8502e89e-509a-4cf5-8308-d769dba3a547-kube-api-access-hmqsm\") pod \"keystone-c798-account-create-update-6jt58\" (UID: \"8502e89e-509a-4cf5-8308-d769dba3a547\") " pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.444281 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxkp4\" (UniqueName: \"kubernetes.io/projected/fcefa00f-aa74-4399-8e77-df956e479367-kube-api-access-qxkp4\") pod \"placement-db-create-pxsb5\" (UID: \"fcefa00f-aa74-4399-8e77-df956e479367\") " pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.444455 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b96ed96a-7a15-4ab1-add3-930f95896b44-operator-scripts\") pod \"placement-1ae5-account-create-update-55l2b\" (UID: \"b96ed96a-7a15-4ab1-add3-930f95896b44\") " pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.444549 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcefa00f-aa74-4399-8e77-df956e479367-operator-scripts\") pod \"placement-db-create-pxsb5\" (UID: \"fcefa00f-aa74-4399-8e77-df956e479367\") " pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.445341 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcefa00f-aa74-4399-8e77-df956e479367-operator-scripts\") pod \"placement-db-create-pxsb5\" (UID: \"fcefa00f-aa74-4399-8e77-df956e479367\") " pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.445413 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzvj2\" (UniqueName: \"kubernetes.io/projected/b96ed96a-7a15-4ab1-add3-930f95896b44-kube-api-access-bzvj2\") pod \"placement-1ae5-account-create-update-55l2b\" (UID: \"b96ed96a-7a15-4ab1-add3-930f95896b44\") " pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.465270 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxkp4\" (UniqueName: \"kubernetes.io/projected/fcefa00f-aa74-4399-8e77-df956e479367-kube-api-access-qxkp4\") pod \"placement-db-create-pxsb5\" (UID: \"fcefa00f-aa74-4399-8e77-df956e479367\") " pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.466111 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.543479 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2w7pr"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.546496 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzvj2\" (UniqueName: \"kubernetes.io/projected/b96ed96a-7a15-4ab1-add3-930f95896b44-kube-api-access-bzvj2\") pod \"placement-1ae5-account-create-update-55l2b\" (UID: \"b96ed96a-7a15-4ab1-add3-930f95896b44\") " pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.546602 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b96ed96a-7a15-4ab1-add3-930f95896b44-operator-scripts\") pod \"placement-1ae5-account-create-update-55l2b\" (UID: \"b96ed96a-7a15-4ab1-add3-930f95896b44\") " pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.547460 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b96ed96a-7a15-4ab1-add3-930f95896b44-operator-scripts\") pod \"placement-1ae5-account-create-update-55l2b\" (UID: \"b96ed96a-7a15-4ab1-add3-930f95896b44\") " pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.571211 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2w7pr"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.571333 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.578492 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.609477 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzvj2\" (UniqueName: \"kubernetes.io/projected/b96ed96a-7a15-4ab1-add3-930f95896b44-kube-api-access-bzvj2\") pod \"placement-1ae5-account-create-update-55l2b\" (UID: \"b96ed96a-7a15-4ab1-add3-930f95896b44\") " pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.652236 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-operator-scripts\") pod \"glance-db-create-2w7pr\" (UID: \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\") " pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.652352 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzzw\" (UniqueName: \"kubernetes.io/projected/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-kube-api-access-kmzzw\") pod \"glance-db-create-2w7pr\" (UID: \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\") " pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.666451 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-54f1-account-create-update-p6hrg"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.670575 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.682545 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.733576 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-48955-config-gg8jq" event={"ID":"61e872f8-178c-4fc2-9445-78bb85dcc09e","Type":"ContainerStarted","Data":"fe398f9040591043645828b4c3af1611dc689f24ef0923add8083692d56d8a25"} Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.754594 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32bcbe2d-59ae-41f7-861e-04b27330d055-operator-scripts\") pod \"glance-54f1-account-create-update-p6hrg\" (UID: \"32bcbe2d-59ae-41f7-861e-04b27330d055\") " pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.754676 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-operator-scripts\") pod \"glance-db-create-2w7pr\" (UID: \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\") " pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.754777 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzzw\" (UniqueName: \"kubernetes.io/projected/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-kube-api-access-kmzzw\") pod \"glance-db-create-2w7pr\" (UID: \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\") " pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.754891 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jxs8\" (UniqueName: \"kubernetes.io/projected/32bcbe2d-59ae-41f7-861e-04b27330d055-kube-api-access-5jxs8\") pod \"glance-54f1-account-create-update-p6hrg\" (UID: \"32bcbe2d-59ae-41f7-861e-04b27330d055\") " pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.757070 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-operator-scripts\") pod \"glance-db-create-2w7pr\" (UID: \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\") " pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.777031 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-54f1-account-create-update-p6hrg"] Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.800962 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzzw\" (UniqueName: \"kubernetes.io/projected/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-kube-api-access-kmzzw\") pod \"glance-db-create-2w7pr\" (UID: \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\") " pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.857412 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32bcbe2d-59ae-41f7-861e-04b27330d055-operator-scripts\") pod \"glance-54f1-account-create-update-p6hrg\" (UID: \"32bcbe2d-59ae-41f7-861e-04b27330d055\") " 
pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.857565 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jxs8\" (UniqueName: \"kubernetes.io/projected/32bcbe2d-59ae-41f7-861e-04b27330d055-kube-api-access-5jxs8\") pod \"glance-54f1-account-create-update-p6hrg\" (UID: \"32bcbe2d-59ae-41f7-861e-04b27330d055\") " pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.858567 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32bcbe2d-59ae-41f7-861e-04b27330d055-operator-scripts\") pod \"glance-54f1-account-create-update-p6hrg\" (UID: \"32bcbe2d-59ae-41f7-861e-04b27330d055\") " pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.871774 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.883977 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jxs8\" (UniqueName: \"kubernetes.io/projected/32bcbe2d-59ae-41f7-861e-04b27330d055-kube-api-access-5jxs8\") pod \"glance-54f1-account-create-update-p6hrg\" (UID: \"32bcbe2d-59ae-41f7-861e-04b27330d055\") " pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:06 crc kubenswrapper[4689]: I1201 08:56:06.970401 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.019534 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.078166 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rg78r"] Dec 01 08:56:07 crc kubenswrapper[4689]: W1201 08:56:07.109146 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda07bdee7_df02_4a87_ab8e_68939cd995bd.slice/crio-f6e634416107477c3663d3abc6b5c8d3c91e1c43b62fa8db6cbff6182c03bc8b WatchSource:0}: Error finding container f6e634416107477c3663d3abc6b5c8d3c91e1c43b62fa8db6cbff6182c03bc8b: Status 404 returned error can't find the container with id f6e634416107477c3663d3abc6b5c8d3c91e1c43b62fa8db6cbff6182c03bc8b Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.185797 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c798-account-create-update-6jt58"] Dec 01 08:56:07 crc kubenswrapper[4689]: W1201 08:56:07.207413 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8502e89e_509a_4cf5_8308_d769dba3a547.slice/crio-24e6fa8066ed11d43b5314c7f6da13e2bd2d48859caaf079876fb27cea4ea7df WatchSource:0}: Error finding container 24e6fa8066ed11d43b5314c7f6da13e2bd2d48859caaf079876fb27cea4ea7df: Status 404 returned error can't find the container with id 24e6fa8066ed11d43b5314c7f6da13e2bd2d48859caaf079876fb27cea4ea7df Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.294455 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pxsb5"] Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.426489 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1ae5-account-create-update-55l2b"] Dec 01 08:56:07 crc kubenswrapper[4689]: E1201 08:56:07.434193 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61e872f8_178c_4fc2_9445_78bb85dcc09e.slice/crio-a1c61da5e27022bc3257486369e76e7126d83239ca19ba64d25e4cb765522ca6.scope\": RecentStats: unable to find data in memory cache]" Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.533470 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2w7pr"] Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.651946 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-54f1-account-create-update-p6hrg"] Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.783282 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:07 crc kubenswrapper[4689]: E1201 08:56:07.783720 4689 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 01 08:56:07 crc kubenswrapper[4689]: E1201 08:56:07.783778 4689 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 01 08:56:07 crc kubenswrapper[4689]: E1201 08:56:07.783869 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift 
podName:c18c4a63-48ba-42e2-a7f0-d5750963b90f nodeName:}" failed. No retries permitted until 2025-12-01 08:56:15.783841429 +0000 UTC m=+1055.856129333 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift") pod "swift-storage-0" (UID: "c18c4a63-48ba-42e2-a7f0-d5750963b90f") : configmap "swift-ring-files" not found Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.864086 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2w7pr" event={"ID":"3b0afecc-7b1c-43f8-b9cd-7595bbd87459","Type":"ContainerStarted","Data":"e80500fdb3a56308606847ec7578d5c5878b45a72eafd5d8aadaa2a84eea3c37"} Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.894575 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pxsb5" event={"ID":"fcefa00f-aa74-4399-8e77-df956e479367","Type":"ContainerStarted","Data":"4ed72779f865512f901b1307a388fa4c0f2fe63920312786edf5fef4882a1982"} Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.903047 4689 generic.go:334] "Generic (PLEG): container finished" podID="61e872f8-178c-4fc2-9445-78bb85dcc09e" containerID="a1c61da5e27022bc3257486369e76e7126d83239ca19ba64d25e4cb765522ca6" exitCode=0 Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.903177 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-48955-config-gg8jq" event={"ID":"61e872f8-178c-4fc2-9445-78bb85dcc09e","Type":"ContainerDied","Data":"a1c61da5e27022bc3257486369e76e7126d83239ca19ba64d25e4cb765522ca6"} Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.910003 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rg78r" event={"ID":"a07bdee7-df02-4a87-ab8e-68939cd995bd","Type":"ContainerStarted","Data":"8a40b23d6ad02fadffbef6461a975a8da162769f0b19fe8b3fc9f29dda89da0d"} Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.910053 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rg78r" event={"ID":"a07bdee7-df02-4a87-ab8e-68939cd995bd","Type":"ContainerStarted","Data":"f6e634416107477c3663d3abc6b5c8d3c91e1c43b62fa8db6cbff6182c03bc8b"} Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.918509 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ae5-account-create-update-55l2b" event={"ID":"b96ed96a-7a15-4ab1-add3-930f95896b44","Type":"ContainerStarted","Data":"23687374f1b2b6f8dbcdbddb75faf8b62b5878e827e0e2e3f1a655327f332f32"} Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.939903 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-54f1-account-create-update-p6hrg" event={"ID":"32bcbe2d-59ae-41f7-861e-04b27330d055","Type":"ContainerStarted","Data":"1f01bee9ad928ebe2cb044e779c027ed5bdb79b27995a4a9268ed8736387123a"} Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.947652 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c798-account-create-update-6jt58" event={"ID":"8502e89e-509a-4cf5-8308-d769dba3a547","Type":"ContainerStarted","Data":"8a5d0037c0e01786c1948e970feceedb7d0defff2fc354f6a572480941380d75"} Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.947849 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c798-account-create-update-6jt58" event={"ID":"8502e89e-509a-4cf5-8308-d769dba3a547","Type":"ContainerStarted","Data":"24e6fa8066ed11d43b5314c7f6da13e2bd2d48859caaf079876fb27cea4ea7df"} 
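The etc-swift failure above shows kubelet's per-volume retry backoff: the projected volume cannot be built because the "swift-ring-files" configmap does not exist yet (presumably it is published later by a ring-rebuild job; that is an assumption, not something this log states), so the operation is re-queued with "No retries permitted until ... (durationBeforeRetry 8s)". A minimal sketch of that policy, assuming kubelet's usual 500ms base, factor-2 growth, and ~2m cap — values not read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// Assumed retry constants for the sketch; kubelet's nestedpendingoperations
// uses an exponential backoff like this, but these exact values are not
// confirmed by the log above.
const (
	initialBackoff = 500 * time.Millisecond
	backoffFactor  = 2
	maxBackoff     = 2*time.Minute + 2*time.Second
)

// durationBeforeRetry doubles the wait after each consecutive failure,
// capped at maxBackoff.
func durationBeforeRetry(failures int) time.Duration {
	d := initialBackoff
	for i := 1; i < failures; i++ {
		d *= backoffFactor
		if d >= maxBackoff {
			return maxBackoff
		}
	}
	return d
}

func main() {
	// The log's "durationBeforeRetry 8s" is consistent with the fifth
	// consecutive failure under these assumptions: 500ms * 2^4 = 8s.
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> retry in %s\n", n, durationBeforeRetry(n))
	}
}
```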
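The pod_startup_latency_tracker entries that follow report podStartSLOduration for each job pod. With no image pull involved (firstStartedPulling and lastFinishedPulling are zero times), the reported duration is simply the observed-running timestamp minus podCreationTimestamp; the sketch below reproduces the arithmetic for ovn-controller-48955-config-8clj4 using the timestamps printed later in this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time.String() layout, which is what the log prints.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the pod_startup_latency_tracker entry below;
	// parse errors are ignored for brevity in this sketch.
	created, _ := time.Parse(layout, "2025-12-01 08:56:10 +0000 UTC")
	running, _ := time.Parse(layout, "2025-12-01 08:56:12.116194325 +0000 UTC")

	// Prints 2.116194325s, matching podStartSLOduration in the log.
	fmt.Println(running.Sub(created))
}
```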
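Shortly below, the old dnsmasq pod being killed fails its readiness probe with "dial tcp 10.217.0.110:5353: connect: connection refused" — the expected result of probing a container whose process has already exited. A minimal stand-in for such a TCP-style check; the address is taken from the log, but the real probe type, port, and timeout live in the pod spec, which is not part of this transcript:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeTCP succeeds if a TCP connection can be opened within the timeout,
// mirroring how a tcpSocket readiness probe behaves.
func probeTCP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		// e.g. "connect: connection refused" once dnsmasq has stopped.
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeTCP("10.217.0.110:5353", time.Second); err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("ready")
}
```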
Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.966551 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-1ae5-account-create-update-55l2b" podStartSLOduration=1.9664787540000002 podStartE2EDuration="1.966478754s" podCreationTimestamp="2025-12-01 08:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:07.95685865 +0000 UTC m=+1048.029146564" watchObservedRunningTime="2025-12-01 08:56:07.966478754 +0000 UTC m=+1048.038766658" Dec 01 08:56:07 crc kubenswrapper[4689]: I1201 08:56:07.982417 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-rg78r" podStartSLOduration=2.98239104 podStartE2EDuration="2.98239104s" podCreationTimestamp="2025-12-01 08:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:07.974195486 +0000 UTC m=+1048.046483390" watchObservedRunningTime="2025-12-01 08:56:07.98239104 +0000 UTC m=+1048.054678944" Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.005284 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c798-account-create-update-6jt58" podStartSLOduration=2.005259937 podStartE2EDuration="2.005259937s" podCreationTimestamp="2025-12-01 08:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:07.998410809 +0000 UTC m=+1048.070698713" watchObservedRunningTime="2025-12-01 08:56:08.005259937 +0000 UTC m=+1048.077547841" Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.970783 4689 generic.go:334] "Generic (PLEG): container finished" podID="3b0afecc-7b1c-43f8-b9cd-7595bbd87459" containerID="3ad5281bd9650caaec47667d7f2d0a6171010575626cbaaafe0e8ce406d6d73f" exitCode=0 Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.970905 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2w7pr" event={"ID":"3b0afecc-7b1c-43f8-b9cd-7595bbd87459","Type":"ContainerDied","Data":"3ad5281bd9650caaec47667d7f2d0a6171010575626cbaaafe0e8ce406d6d73f"} Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.977497 4689 generic.go:334] "Generic (PLEG): container finished" podID="fcefa00f-aa74-4399-8e77-df956e479367" containerID="35abe5af8a910c47f78d78d2cca418a15ad2aff3ea98171e0d5f158f808e9a37" exitCode=0 Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.977566 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pxsb5" event={"ID":"fcefa00f-aa74-4399-8e77-df956e479367","Type":"ContainerDied","Data":"35abe5af8a910c47f78d78d2cca418a15ad2aff3ea98171e0d5f158f808e9a37"} Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.984970 4689 generic.go:334] "Generic (PLEG): container finished" podID="a07bdee7-df02-4a87-ab8e-68939cd995bd" containerID="8a40b23d6ad02fadffbef6461a975a8da162769f0b19fe8b3fc9f29dda89da0d" exitCode=0 Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.985192 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rg78r" event={"ID":"a07bdee7-df02-4a87-ab8e-68939cd995bd","Type":"ContainerDied","Data":"8a40b23d6ad02fadffbef6461a975a8da162769f0b19fe8b3fc9f29dda89da0d"} Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.986701 4689 generic.go:334] "Generic (PLEG): container finished" 
podID="b96ed96a-7a15-4ab1-add3-930f95896b44" containerID="b27f162348fa9714141b251ad2f6e1ee1b3b107f4421258f0498d55df45806a4" exitCode=0 Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.986766 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ae5-account-create-update-55l2b" event={"ID":"b96ed96a-7a15-4ab1-add3-930f95896b44","Type":"ContainerDied","Data":"b27f162348fa9714141b251ad2f6e1ee1b3b107f4421258f0498d55df45806a4"} Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.996575 4689 generic.go:334] "Generic (PLEG): container finished" podID="32bcbe2d-59ae-41f7-861e-04b27330d055" containerID="b5070d66b6d35927be05ee7f821923bcda2c5f89c09741acaab5dd0f84e0fe72" exitCode=0 Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.996655 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-54f1-account-create-update-p6hrg" event={"ID":"32bcbe2d-59ae-41f7-861e-04b27330d055","Type":"ContainerDied","Data":"b5070d66b6d35927be05ee7f821923bcda2c5f89c09741acaab5dd0f84e0fe72"} Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.998795 4689 generic.go:334] "Generic (PLEG): container finished" podID="8502e89e-509a-4cf5-8308-d769dba3a547" containerID="8a5d0037c0e01786c1948e970feceedb7d0defff2fc354f6a572480941380d75" exitCode=0 Dec 01 08:56:08 crc kubenswrapper[4689]: I1201 08:56:08.998893 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c798-account-create-update-6jt58" event={"ID":"8502e89e-509a-4cf5-8308-d769dba3a547","Type":"ContainerDied","Data":"8a5d0037c0e01786c1948e970feceedb7d0defff2fc354f6a572480941380d75"} Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.174688 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.272479 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tlzl7"] Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.272733 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" podUID="6161a545-1c73-42c1-8176-c311aad98fed" containerName="dnsmasq-dns" containerID="cri-o://fc91af50ca133f0cace1425809c5db035ee0d26aabac842b9ed4b0ed3a035e9b" gracePeriod=10 Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.430833 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.499161 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" podUID="6161a545-1c73-42c1-8176-c311aad98fed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.522081 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run-ovn\") pod \"61e872f8-178c-4fc2-9445-78bb85dcc09e\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.522123 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run\") pod \"61e872f8-178c-4fc2-9445-78bb85dcc09e\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.522149 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-additional-scripts\") pod \"61e872f8-178c-4fc2-9445-78bb85dcc09e\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.522187 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-scripts\") pod \"61e872f8-178c-4fc2-9445-78bb85dcc09e\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.522239 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-log-ovn\") pod \"61e872f8-178c-4fc2-9445-78bb85dcc09e\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.522263 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fclk\" (UniqueName: \"kubernetes.io/projected/61e872f8-178c-4fc2-9445-78bb85dcc09e-kube-api-access-5fclk\") pod \"61e872f8-178c-4fc2-9445-78bb85dcc09e\" (UID: \"61e872f8-178c-4fc2-9445-78bb85dcc09e\") " Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.522294 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run" (OuterVolumeSpecName: "var-run") pod "61e872f8-178c-4fc2-9445-78bb85dcc09e" (UID: "61e872f8-178c-4fc2-9445-78bb85dcc09e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.522340 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "61e872f8-178c-4fc2-9445-78bb85dcc09e" (UID: "61e872f8-178c-4fc2-9445-78bb85dcc09e"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.523577 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "61e872f8-178c-4fc2-9445-78bb85dcc09e" (UID: "61e872f8-178c-4fc2-9445-78bb85dcc09e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.523865 4689 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.523874 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "61e872f8-178c-4fc2-9445-78bb85dcc09e" (UID: "61e872f8-178c-4fc2-9445-78bb85dcc09e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.523888 4689 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.523898 4689 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61e872f8-178c-4fc2-9445-78bb85dcc09e-var-run\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.524588 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-scripts" (OuterVolumeSpecName: "scripts") pod "61e872f8-178c-4fc2-9445-78bb85dcc09e" (UID: "61e872f8-178c-4fc2-9445-78bb85dcc09e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.533716 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e872f8-178c-4fc2-9445-78bb85dcc09e-kube-api-access-5fclk" (OuterVolumeSpecName: "kube-api-access-5fclk") pod "61e872f8-178c-4fc2-9445-78bb85dcc09e" (UID: "61e872f8-178c-4fc2-9445-78bb85dcc09e"). InnerVolumeSpecName "kube-api-access-5fclk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.625748 4689 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.625784 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61e872f8-178c-4fc2-9445-78bb85dcc09e-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:09 crc kubenswrapper[4689]: I1201 08:56:09.625796 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fclk\" (UniqueName: \"kubernetes.io/projected/61e872f8-178c-4fc2-9445-78bb85dcc09e-kube-api-access-5fclk\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.009747 4689 generic.go:334] "Generic (PLEG): container finished" podID="6161a545-1c73-42c1-8176-c311aad98fed" containerID="fc91af50ca133f0cace1425809c5db035ee0d26aabac842b9ed4b0ed3a035e9b" exitCode=0 Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.010033 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" event={"ID":"6161a545-1c73-42c1-8176-c311aad98fed","Type":"ContainerDied","Data":"fc91af50ca133f0cace1425809c5db035ee0d26aabac842b9ed4b0ed3a035e9b"} Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.011850 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-48955-config-gg8jq" event={"ID":"61e872f8-178c-4fc2-9445-78bb85dcc09e","Type":"ContainerDied","Data":"fe398f9040591043645828b4c3af1611dc689f24ef0923add8083692d56d8a25"} Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.011889 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe398f9040591043645828b4c3af1611dc689f24ef0923add8083692d56d8a25" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.012075 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-48955-config-gg8jq" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.446339 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.539666 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzvj2\" (UniqueName: \"kubernetes.io/projected/b96ed96a-7a15-4ab1-add3-930f95896b44-kube-api-access-bzvj2\") pod \"b96ed96a-7a15-4ab1-add3-930f95896b44\" (UID: \"b96ed96a-7a15-4ab1-add3-930f95896b44\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.539754 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b96ed96a-7a15-4ab1-add3-930f95896b44-operator-scripts\") pod \"b96ed96a-7a15-4ab1-add3-930f95896b44\" (UID: \"b96ed96a-7a15-4ab1-add3-930f95896b44\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.540933 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96ed96a-7a15-4ab1-add3-930f95896b44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b96ed96a-7a15-4ab1-add3-930f95896b44" (UID: "b96ed96a-7a15-4ab1-add3-930f95896b44"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.548610 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96ed96a-7a15-4ab1-add3-930f95896b44-kube-api-access-bzvj2" (OuterVolumeSpecName: "kube-api-access-bzvj2") pod "b96ed96a-7a15-4ab1-add3-930f95896b44" (UID: "b96ed96a-7a15-4ab1-add3-930f95896b44"). InnerVolumeSpecName "kube-api-access-bzvj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.580525 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-48955-config-gg8jq"] Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.589491 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-48955-config-gg8jq"] Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.646307 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzvj2\" (UniqueName: \"kubernetes.io/projected/b96ed96a-7a15-4ab1-add3-930f95896b44-kube-api-access-bzvj2\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.660108 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b96ed96a-7a15-4ab1-add3-930f95896b44-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.681286 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-48955-config-8clj4"] Dec 01 08:56:10 crc kubenswrapper[4689]: E1201 08:56:10.681932 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e872f8-178c-4fc2-9445-78bb85dcc09e" containerName="ovn-config" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.681948 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e872f8-178c-4fc2-9445-78bb85dcc09e" containerName="ovn-config" Dec 01 08:56:10 crc kubenswrapper[4689]: E1201 08:56:10.681963 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96ed96a-7a15-4ab1-add3-930f95896b44" containerName="mariadb-account-create-update" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.681972 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96ed96a-7a15-4ab1-add3-930f95896b44" containerName="mariadb-account-create-update" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.686781 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96ed96a-7a15-4ab1-add3-930f95896b44" containerName="mariadb-account-create-update" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.686821 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e872f8-178c-4fc2-9445-78bb85dcc09e" containerName="ovn-config" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.687711 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.690710 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.732897 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-48955-config-8clj4"] Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.762430 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-additional-scripts\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.773280 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-scripts\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.775808 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-log-ovn\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.775894 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.776023 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run-ovn\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.776052 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bplk\" (UniqueName: \"kubernetes.io/projected/1502e410-0332-4b3c-beb1-c8b987ab6538-kube-api-access-8bplk\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.824233 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.840260 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.841949 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.879656 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-nb\") pod \"6161a545-1c73-42c1-8176-c311aad98fed\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.879714 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32bcbe2d-59ae-41f7-861e-04b27330d055-operator-scripts\") pod \"32bcbe2d-59ae-41f7-861e-04b27330d055\" (UID: \"32bcbe2d-59ae-41f7-861e-04b27330d055\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.879761 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr7m6\" (UniqueName: \"kubernetes.io/projected/6161a545-1c73-42c1-8176-c311aad98fed-kube-api-access-wr7m6\") pod \"6161a545-1c73-42c1-8176-c311aad98fed\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.879783 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jxs8\" (UniqueName: \"kubernetes.io/projected/32bcbe2d-59ae-41f7-861e-04b27330d055-kube-api-access-5jxs8\") pod \"32bcbe2d-59ae-41f7-861e-04b27330d055\" (UID: \"32bcbe2d-59ae-41f7-861e-04b27330d055\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.879804 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-config\") pod \"6161a545-1c73-42c1-8176-c311aad98fed\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.879824 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmqsm\" (UniqueName: \"kubernetes.io/projected/8502e89e-509a-4cf5-8308-d769dba3a547-kube-api-access-hmqsm\") pod \"8502e89e-509a-4cf5-8308-d769dba3a547\" (UID: \"8502e89e-509a-4cf5-8308-d769dba3a547\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.879849 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-sb\") pod \"6161a545-1c73-42c1-8176-c311aad98fed\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.879928 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-dns-svc\") pod \"6161a545-1c73-42c1-8176-c311aad98fed\" (UID: \"6161a545-1c73-42c1-8176-c311aad98fed\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.879985 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8502e89e-509a-4cf5-8308-d769dba3a547-operator-scripts\") pod \"8502e89e-509a-4cf5-8308-d769dba3a547\" (UID: \"8502e89e-509a-4cf5-8308-d769dba3a547\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.880202 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-scripts\") pod 
\"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.880262 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-log-ovn\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.880280 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.880318 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run-ovn\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.880336 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bplk\" (UniqueName: \"kubernetes.io/projected/1502e410-0332-4b3c-beb1-c8b987ab6538-kube-api-access-8bplk\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.880438 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-additional-scripts\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.881104 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-additional-scripts\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.887482 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bcbe2d-59ae-41f7-861e-04b27330d055-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32bcbe2d-59ae-41f7-861e-04b27330d055" (UID: "32bcbe2d-59ae-41f7-861e-04b27330d055"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.892520 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8502e89e-509a-4cf5-8308-d769dba3a547-kube-api-access-hmqsm" (OuterVolumeSpecName: "kube-api-access-hmqsm") pod "8502e89e-509a-4cf5-8308-d769dba3a547" (UID: "8502e89e-509a-4cf5-8308-d769dba3a547"). InnerVolumeSpecName "kube-api-access-hmqsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.899561 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8502e89e-509a-4cf5-8308-d769dba3a547-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8502e89e-509a-4cf5-8308-d769dba3a547" (UID: "8502e89e-509a-4cf5-8308-d769dba3a547"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.901279 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-scripts\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.901431 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32bcbe2d-59ae-41f7-861e-04b27330d055-kube-api-access-5jxs8" (OuterVolumeSpecName: "kube-api-access-5jxs8") pod "32bcbe2d-59ae-41f7-861e-04b27330d055" (UID: "32bcbe2d-59ae-41f7-861e-04b27330d055"). InnerVolumeSpecName "kube-api-access-5jxs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.901654 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run-ovn\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.901694 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.905088 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-log-ovn\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.906322 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6161a545-1c73-42c1-8176-c311aad98fed-kube-api-access-wr7m6" (OuterVolumeSpecName: "kube-api-access-wr7m6") pod "6161a545-1c73-42c1-8176-c311aad98fed" (UID: "6161a545-1c73-42c1-8176-c311aad98fed"). InnerVolumeSpecName "kube-api-access-wr7m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.914234 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.918878 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bplk\" (UniqueName: \"kubernetes.io/projected/1502e410-0332-4b3c-beb1-c8b987ab6538-kube-api-access-8bplk\") pod \"ovn-controller-48955-config-8clj4\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.919480 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.934416 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.959548 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6161a545-1c73-42c1-8176-c311aad98fed" (UID: "6161a545-1c73-42c1-8176-c311aad98fed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982069 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzzw\" (UniqueName: \"kubernetes.io/projected/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-kube-api-access-kmzzw\") pod \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\" (UID: \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982180 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcefa00f-aa74-4399-8e77-df956e479367-operator-scripts\") pod \"fcefa00f-aa74-4399-8e77-df956e479367\" (UID: \"fcefa00f-aa74-4399-8e77-df956e479367\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982221 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxkp4\" (UniqueName: \"kubernetes.io/projected/fcefa00f-aa74-4399-8e77-df956e479367-kube-api-access-qxkp4\") pod \"fcefa00f-aa74-4399-8e77-df956e479367\" (UID: \"fcefa00f-aa74-4399-8e77-df956e479367\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982293 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-operator-scripts\") pod \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\" (UID: \"3b0afecc-7b1c-43f8-b9cd-7595bbd87459\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982329 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07bdee7-df02-4a87-ab8e-68939cd995bd-operator-scripts\") pod \"a07bdee7-df02-4a87-ab8e-68939cd995bd\" (UID: \"a07bdee7-df02-4a87-ab8e-68939cd995bd\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982351 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4tv5\" (UniqueName: \"kubernetes.io/projected/a07bdee7-df02-4a87-ab8e-68939cd995bd-kube-api-access-t4tv5\") pod \"a07bdee7-df02-4a87-ab8e-68939cd995bd\" (UID: \"a07bdee7-df02-4a87-ab8e-68939cd995bd\") " Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982698 4689 reconciler_common.go:293] "Volume detached 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8502e89e-509a-4cf5-8308-d769dba3a547-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982710 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32bcbe2d-59ae-41f7-861e-04b27330d055-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982718 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr7m6\" (UniqueName: \"kubernetes.io/projected/6161a545-1c73-42c1-8176-c311aad98fed-kube-api-access-wr7m6\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982728 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jxs8\" (UniqueName: \"kubernetes.io/projected/32bcbe2d-59ae-41f7-861e-04b27330d055-kube-api-access-5jxs8\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982738 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmqsm\" (UniqueName: \"kubernetes.io/projected/8502e89e-509a-4cf5-8308-d769dba3a547-kube-api-access-hmqsm\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.982746 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.983619 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b0afecc-7b1c-43f8-b9cd-7595bbd87459" (UID: "3b0afecc-7b1c-43f8-b9cd-7595bbd87459"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.984109 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcefa00f-aa74-4399-8e77-df956e479367-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fcefa00f-aa74-4399-8e77-df956e479367" (UID: "fcefa00f-aa74-4399-8e77-df956e479367"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.984256 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07bdee7-df02-4a87-ab8e-68939cd995bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a07bdee7-df02-4a87-ab8e-68939cd995bd" (UID: "a07bdee7-df02-4a87-ab8e-68939cd995bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.991078 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-kube-api-access-kmzzw" (OuterVolumeSpecName: "kube-api-access-kmzzw") pod "3b0afecc-7b1c-43f8-b9cd-7595bbd87459" (UID: "3b0afecc-7b1c-43f8-b9cd-7595bbd87459"). InnerVolumeSpecName "kube-api-access-kmzzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.991133 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcefa00f-aa74-4399-8e77-df956e479367-kube-api-access-qxkp4" (OuterVolumeSpecName: "kube-api-access-qxkp4") pod "fcefa00f-aa74-4399-8e77-df956e479367" (UID: "fcefa00f-aa74-4399-8e77-df956e479367"). InnerVolumeSpecName "kube-api-access-qxkp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.991528 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-config" (OuterVolumeSpecName: "config") pod "6161a545-1c73-42c1-8176-c311aad98fed" (UID: "6161a545-1c73-42c1-8176-c311aad98fed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:10 crc kubenswrapper[4689]: I1201 08:56:10.997589 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07bdee7-df02-4a87-ab8e-68939cd995bd-kube-api-access-t4tv5" (OuterVolumeSpecName: "kube-api-access-t4tv5") pod "a07bdee7-df02-4a87-ab8e-68939cd995bd" (UID: "a07bdee7-df02-4a87-ab8e-68939cd995bd"). InnerVolumeSpecName "kube-api-access-t4tv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.007487 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6161a545-1c73-42c1-8176-c311aad98fed" (UID: "6161a545-1c73-42c1-8176-c311aad98fed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.017897 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6161a545-1c73-42c1-8176-c311aad98fed" (UID: "6161a545-1c73-42c1-8176-c311aad98fed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.030652 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1ae5-account-create-update-55l2b" event={"ID":"b96ed96a-7a15-4ab1-add3-930f95896b44","Type":"ContainerDied","Data":"23687374f1b2b6f8dbcdbddb75faf8b62b5878e827e0e2e3f1a655327f332f32"} Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.030712 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23687374f1b2b6f8dbcdbddb75faf8b62b5878e827e0e2e3f1a655327f332f32" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.030797 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1ae5-account-create-update-55l2b" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.041495 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" event={"ID":"6161a545-1c73-42c1-8176-c311aad98fed","Type":"ContainerDied","Data":"6ed64f9027398840c804f06f609402a5aa2a7c8e5431c01d49855455e258d7c8"} Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.041550 4689 scope.go:117] "RemoveContainer" containerID="fc91af50ca133f0cace1425809c5db035ee0d26aabac842b9ed4b0ed3a035e9b" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.041673 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tlzl7" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.055111 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c798-account-create-update-6jt58" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.058831 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-54f1-account-create-update-p6hrg" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.060638 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2w7pr" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.069801 4689 scope.go:117] "RemoveContainer" containerID="6b3853ac6b776a5558ee9a5e0acd3df9f1721b31474e7b25bc88a1961fc1c25e" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.070103 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pxsb5" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.079053 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rg78r" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.089611 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.089651 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzzw\" (UniqueName: \"kubernetes.io/projected/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-kube-api-access-kmzzw\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.089665 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcefa00f-aa74-4399-8e77-df956e479367-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.089675 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxkp4\" (UniqueName: \"kubernetes.io/projected/fcefa00f-aa74-4399-8e77-df956e479367-kube-api-access-qxkp4\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.089684 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b0afecc-7b1c-43f8-b9cd-7595bbd87459-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.089697 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.089710 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a07bdee7-df02-4a87-ab8e-68939cd995bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.089723 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4tv5\" (UniqueName: \"kubernetes.io/projected/a07bdee7-df02-4a87-ab8e-68939cd995bd-kube-api-access-t4tv5\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.089736 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6161a545-1c73-42c1-8176-c311aad98fed-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.092653 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e872f8-178c-4fc2-9445-78bb85dcc09e" path="/var/lib/kubelet/pods/61e872f8-178c-4fc2-9445-78bb85dcc09e/volumes" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094044 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-54f1-account-create-update-p6hrg" event={"ID":"32bcbe2d-59ae-41f7-861e-04b27330d055","Type":"ContainerDied","Data":"1f01bee9ad928ebe2cb044e779c027ed5bdb79b27995a4a9268ed8736387123a"} Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094076 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f01bee9ad928ebe2cb044e779c027ed5bdb79b27995a4a9268ed8736387123a" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094088 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c798-account-create-update-6jt58" 
event={"ID":"8502e89e-509a-4cf5-8308-d769dba3a547","Type":"ContainerDied","Data":"24e6fa8066ed11d43b5314c7f6da13e2bd2d48859caaf079876fb27cea4ea7df"} Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094099 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24e6fa8066ed11d43b5314c7f6da13e2bd2d48859caaf079876fb27cea4ea7df" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094107 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2w7pr" event={"ID":"3b0afecc-7b1c-43f8-b9cd-7595bbd87459","Type":"ContainerDied","Data":"e80500fdb3a56308606847ec7578d5c5878b45a72eafd5d8aadaa2a84eea3c37"} Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094115 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e80500fdb3a56308606847ec7578d5c5878b45a72eafd5d8aadaa2a84eea3c37" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094122 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pxsb5" event={"ID":"fcefa00f-aa74-4399-8e77-df956e479367","Type":"ContainerDied","Data":"4ed72779f865512f901b1307a388fa4c0f2fe63920312786edf5fef4882a1982"} Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094130 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed72779f865512f901b1307a388fa4c0f2fe63920312786edf5fef4882a1982" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094138 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rg78r" event={"ID":"a07bdee7-df02-4a87-ab8e-68939cd995bd","Type":"ContainerDied","Data":"f6e634416107477c3663d3abc6b5c8d3c91e1c43b62fa8db6cbff6182c03bc8b"} Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.094146 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6e634416107477c3663d3abc6b5c8d3c91e1c43b62fa8db6cbff6182c03bc8b" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.115230 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.127930 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tlzl7"] Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.138159 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tlzl7"] Dec 01 08:56:11 crc kubenswrapper[4689]: I1201 08:56:11.603573 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-48955-config-8clj4"] Dec 01 08:56:12 crc kubenswrapper[4689]: I1201 08:56:12.091947 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-48955-config-8clj4" event={"ID":"1502e410-0332-4b3c-beb1-c8b987ab6538","Type":"ContainerStarted","Data":"b6f6423cbe2982b657e3eb40927c63d6b8572708b83e6bdb30bced8a25fee115"} Dec 01 08:56:12 crc kubenswrapper[4689]: I1201 08:56:12.092307 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-48955-config-8clj4" event={"ID":"1502e410-0332-4b3c-beb1-c8b987ab6538","Type":"ContainerStarted","Data":"a2f484b19e8bc8e4a033e50b2226792d8b905bb20848046238036cbad6b54016"} Dec 01 08:56:12 crc kubenswrapper[4689]: I1201 08:56:12.116225 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-48955-config-8clj4" podStartSLOduration=2.116194325 podStartE2EDuration="2.116194325s" podCreationTimestamp="2025-12-01 08:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:12.116038311 +0000 UTC m=+1052.188326225" watchObservedRunningTime="2025-12-01 08:56:12.116194325 +0000 UTC m=+1052.188482229" Dec 01 08:56:13 crc kubenswrapper[4689]: I1201 08:56:13.061830 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6161a545-1c73-42c1-8176-c311aad98fed" path="/var/lib/kubelet/pods/6161a545-1c73-42c1-8176-c311aad98fed/volumes" Dec 01 08:56:13 crc kubenswrapper[4689]: I1201 08:56:13.102640 4689 generic.go:334] "Generic (PLEG): container finished" podID="1502e410-0332-4b3c-beb1-c8b987ab6538" containerID="b6f6423cbe2982b657e3eb40927c63d6b8572708b83e6bdb30bced8a25fee115" exitCode=0 Dec 01 08:56:13 crc kubenswrapper[4689]: I1201 08:56:13.102705 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-48955-config-8clj4" event={"ID":"1502e410-0332-4b3c-beb1-c8b987ab6538","Type":"ContainerDied","Data":"b6f6423cbe2982b657e3eb40927c63d6b8572708b83e6bdb30bced8a25fee115"} Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.113106 4689 generic.go:334] "Generic (PLEG): container finished" podID="edc6a475-296b-4f29-a48b-6876138662fd" containerID="9696664d4002a6911085236ecd7df6fa9f9eb259ffb735c43de9d7035c79daf7" exitCode=0 Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.113193 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"edc6a475-296b-4f29-a48b-6876138662fd","Type":"ContainerDied","Data":"9696664d4002a6911085236ecd7df6fa9f9eb259ffb735c43de9d7035c79daf7"} Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.459557 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.574634 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-log-ovn\") pod \"1502e410-0332-4b3c-beb1-c8b987ab6538\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.574699 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-additional-scripts\") pod \"1502e410-0332-4b3c-beb1-c8b987ab6538\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.574759 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bplk\" (UniqueName: \"kubernetes.io/projected/1502e410-0332-4b3c-beb1-c8b987ab6538-kube-api-access-8bplk\") pod \"1502e410-0332-4b3c-beb1-c8b987ab6538\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.574833 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-scripts\") pod \"1502e410-0332-4b3c-beb1-c8b987ab6538\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.574894 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run-ovn\") pod \"1502e410-0332-4b3c-beb1-c8b987ab6538\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.574923 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run\") pod \"1502e410-0332-4b3c-beb1-c8b987ab6538\" (UID: \"1502e410-0332-4b3c-beb1-c8b987ab6538\") " Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.575336 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run" (OuterVolumeSpecName: "var-run") pod "1502e410-0332-4b3c-beb1-c8b987ab6538" (UID: "1502e410-0332-4b3c-beb1-c8b987ab6538"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.576124 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1502e410-0332-4b3c-beb1-c8b987ab6538" (UID: "1502e410-0332-4b3c-beb1-c8b987ab6538"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.576258 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1502e410-0332-4b3c-beb1-c8b987ab6538" (UID: "1502e410-0332-4b3c-beb1-c8b987ab6538"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.576417 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-scripts" (OuterVolumeSpecName: "scripts") pod "1502e410-0332-4b3c-beb1-c8b987ab6538" (UID: "1502e410-0332-4b3c-beb1-c8b987ab6538"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.576569 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1502e410-0332-4b3c-beb1-c8b987ab6538" (UID: "1502e410-0332-4b3c-beb1-c8b987ab6538"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.580787 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1502e410-0332-4b3c-beb1-c8b987ab6538-kube-api-access-8bplk" (OuterVolumeSpecName: "kube-api-access-8bplk") pod "1502e410-0332-4b3c-beb1-c8b987ab6538" (UID: "1502e410-0332-4b3c-beb1-c8b987ab6538"). InnerVolumeSpecName "kube-api-access-8bplk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.676731 4689 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.677069 4689 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.677143 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bplk\" (UniqueName: \"kubernetes.io/projected/1502e410-0332-4b3c-beb1-c8b987ab6538-kube-api-access-8bplk\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.677205 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1502e410-0332-4b3c-beb1-c8b987ab6538-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.677278 4689 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.677334 4689 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1502e410-0332-4b3c-beb1-c8b987ab6538-var-run\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.691326 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-48955-config-8clj4"] Dec 01 08:56:14 crc kubenswrapper[4689]: I1201 08:56:14.697568 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-48955-config-8clj4"] Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.059541 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1502e410-0332-4b3c-beb1-c8b987ab6538" 
path="/var/lib/kubelet/pods/1502e410-0332-4b3c-beb1-c8b987ab6538/volumes" Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.122802 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"edc6a475-296b-4f29-a48b-6876138662fd","Type":"ContainerStarted","Data":"a0ba8e18b86610c8300a20ae45e8e15decc2210ba29bef24323021ce9062f808"} Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.124169 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.126522 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-48955-config-8clj4" Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.126565 4689 scope.go:117] "RemoveContainer" containerID="b6f6423cbe2982b657e3eb40927c63d6b8572708b83e6bdb30bced8a25fee115" Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.130808 4689 generic.go:334] "Generic (PLEG): container finished" podID="0e57c646-4b20-4bb9-9c89-bad52b7a1c07" containerID="e1f5456c5ad2dd0431725b9d0b0072efa58d45ce6971689d51ab59578545315e" exitCode=0 Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.130849 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-66t7q" event={"ID":"0e57c646-4b20-4bb9-9c89-bad52b7a1c07","Type":"ContainerDied","Data":"e1f5456c5ad2dd0431725b9d0b0072efa58d45ce6971689d51ab59578545315e"} Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.191540 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.086928205 podStartE2EDuration="1m24.191517003s" podCreationTimestamp="2025-12-01 08:54:51 +0000 UTC" firstStartedPulling="2025-12-01 08:54:53.372428592 +0000 UTC m=+973.444716496" lastFinishedPulling="2025-12-01 08:55:40.47701739 +0000 UTC m=+1020.549305294" observedRunningTime="2025-12-01 08:56:15.163534216 +0000 UTC m=+1055.235822120" watchObservedRunningTime="2025-12-01 08:56:15.191517003 +0000 UTC m=+1055.263804907" Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.832590 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:15 crc kubenswrapper[4689]: I1201 08:56:15.839465 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c18c4a63-48ba-42e2-a7f0-d5750963b90f-etc-swift\") pod \"swift-storage-0\" (UID: \"c18c4a63-48ba-42e2-a7f0-d5750963b90f\") " pod="openstack/swift-storage-0" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.134281 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.574312 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.748877 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-dispersionconf\") pod \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.749026 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-swiftconf\") pod \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.749152 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-combined-ca-bundle\") pod \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.749200 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-ring-data-devices\") pod \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.749234 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-etc-swift\") pod \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.749289 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzvqb\" (UniqueName: \"kubernetes.io/projected/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-kube-api-access-xzvqb\") pod \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.749385 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-scripts\") pod \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\" (UID: \"0e57c646-4b20-4bb9-9c89-bad52b7a1c07\") " Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.750385 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0e57c646-4b20-4bb9-9c89-bad52b7a1c07" (UID: "0e57c646-4b20-4bb9-9c89-bad52b7a1c07"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.750726 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0e57c646-4b20-4bb9-9c89-bad52b7a1c07" (UID: "0e57c646-4b20-4bb9-9c89-bad52b7a1c07"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.756665 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0e57c646-4b20-4bb9-9c89-bad52b7a1c07" (UID: "0e57c646-4b20-4bb9-9c89-bad52b7a1c07"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.765661 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-kube-api-access-xzvqb" (OuterVolumeSpecName: "kube-api-access-xzvqb") pod "0e57c646-4b20-4bb9-9c89-bad52b7a1c07" (UID: "0e57c646-4b20-4bb9-9c89-bad52b7a1c07"). InnerVolumeSpecName "kube-api-access-xzvqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.772792 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e57c646-4b20-4bb9-9c89-bad52b7a1c07" (UID: "0e57c646-4b20-4bb9-9c89-bad52b7a1c07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.775951 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0e57c646-4b20-4bb9-9c89-bad52b7a1c07" (UID: "0e57c646-4b20-4bb9-9c89-bad52b7a1c07"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.791793 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-scripts" (OuterVolumeSpecName: "scripts") pod "0e57c646-4b20-4bb9-9c89-bad52b7a1c07" (UID: "0e57c646-4b20-4bb9-9c89-bad52b7a1c07"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.805034 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 01 08:56:16 crc kubenswrapper[4689]: W1201 08:56:16.809927 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc18c4a63_48ba_42e2_a7f0_d5750963b90f.slice/crio-dde2eb3646c55654ebb297726552f62ea2b1fa7ef5096af80689a1b0e9ed1a02 WatchSource:0}: Error finding container dde2eb3646c55654ebb297726552f62ea2b1fa7ef5096af80689a1b0e9ed1a02: Status 404 returned error can't find the container with id dde2eb3646c55654ebb297726552f62ea2b1fa7ef5096af80689a1b0e9ed1a02 Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.848203 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5qxkw"] Dec 01 08:56:16 crc kubenswrapper[4689]: E1201 08:56:16.848853 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32bcbe2d-59ae-41f7-861e-04b27330d055" containerName="mariadb-account-create-update" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.848976 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bcbe2d-59ae-41f7-861e-04b27330d055" containerName="mariadb-account-create-update" Dec 01 08:56:16 crc kubenswrapper[4689]: E1201 08:56:16.849052 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6161a545-1c73-42c1-8176-c311aad98fed" containerName="init" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.849132 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6161a545-1c73-42c1-8176-c311aad98fed" containerName="init" Dec 01 08:56:16 crc kubenswrapper[4689]: E1201 08:56:16.849217 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b0afecc-7b1c-43f8-b9cd-7595bbd87459" containerName="mariadb-database-create" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.849299 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b0afecc-7b1c-43f8-b9cd-7595bbd87459" containerName="mariadb-database-create" Dec 01 08:56:16 crc kubenswrapper[4689]: E1201 08:56:16.849414 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1502e410-0332-4b3c-beb1-c8b987ab6538" containerName="ovn-config" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.849499 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1502e410-0332-4b3c-beb1-c8b987ab6538" containerName="ovn-config" Dec 01 08:56:16 crc kubenswrapper[4689]: E1201 08:56:16.849586 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8502e89e-509a-4cf5-8308-d769dba3a547" containerName="mariadb-account-create-update" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.849692 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8502e89e-509a-4cf5-8308-d769dba3a547" containerName="mariadb-account-create-update" Dec 01 08:56:16 crc kubenswrapper[4689]: E1201 08:56:16.849774 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e57c646-4b20-4bb9-9c89-bad52b7a1c07" containerName="swift-ring-rebalance" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.849849 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e57c646-4b20-4bb9-9c89-bad52b7a1c07" containerName="swift-ring-rebalance" Dec 01 08:56:16 crc kubenswrapper[4689]: E1201 08:56:16.849960 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6161a545-1c73-42c1-8176-c311aad98fed" containerName="dnsmasq-dns" Dec 01 08:56:16 crc 
kubenswrapper[4689]: I1201 08:56:16.850050 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6161a545-1c73-42c1-8176-c311aad98fed" containerName="dnsmasq-dns" Dec 01 08:56:16 crc kubenswrapper[4689]: E1201 08:56:16.850141 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07bdee7-df02-4a87-ab8e-68939cd995bd" containerName="mariadb-database-create" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.850214 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07bdee7-df02-4a87-ab8e-68939cd995bd" containerName="mariadb-database-create" Dec 01 08:56:16 crc kubenswrapper[4689]: E1201 08:56:16.850310 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcefa00f-aa74-4399-8e77-df956e479367" containerName="mariadb-database-create" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.850397 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcefa00f-aa74-4399-8e77-df956e479367" containerName="mariadb-database-create" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.850662 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="8502e89e-509a-4cf5-8308-d769dba3a547" containerName="mariadb-account-create-update" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.850796 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcefa00f-aa74-4399-8e77-df956e479367" containerName="mariadb-database-create" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.850886 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b0afecc-7b1c-43f8-b9cd-7595bbd87459" containerName="mariadb-database-create" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.850979 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1502e410-0332-4b3c-beb1-c8b987ab6538" containerName="ovn-config" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851095 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6161a545-1c73-42c1-8176-c311aad98fed" containerName="dnsmasq-dns" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851189 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="32bcbe2d-59ae-41f7-861e-04b27330d055" containerName="mariadb-account-create-update" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851272 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="a07bdee7-df02-4a87-ab8e-68939cd995bd" containerName="mariadb-database-create" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851243 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851379 4689 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851391 4689 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851401 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzvqb\" (UniqueName: \"kubernetes.io/projected/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-kube-api-access-xzvqb\") on node \"crc\" DevicePath \"\"" Dec 01 
08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851412 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851419 4689 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851427 4689 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e57c646-4b20-4bb9-9c89-bad52b7a1c07-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851346 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e57c646-4b20-4bb9-9c89-bad52b7a1c07" containerName="swift-ring-rebalance" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.851952 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.859583 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.864899 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5qxkw"] Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.865780 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p2bzt" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.952560 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-config-data\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.952664 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqw4\" (UniqueName: \"kubernetes.io/projected/fbecbbce-632b-4832-b4aa-6834ff6541e5-kube-api-access-qrqw4\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.952728 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-db-sync-config-data\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:16 crc kubenswrapper[4689]: I1201 08:56:16.952794 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-combined-ca-bundle\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.054224 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-config-data\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") 
" pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.054305 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrqw4\" (UniqueName: \"kubernetes.io/projected/fbecbbce-632b-4832-b4aa-6834ff6541e5-kube-api-access-qrqw4\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.054352 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-db-sync-config-data\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.054406 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-combined-ca-bundle\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.059510 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-config-data\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.059539 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-combined-ca-bundle\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.059685 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-db-sync-config-data\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.074724 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrqw4\" (UniqueName: \"kubernetes.io/projected/fbecbbce-632b-4832-b4aa-6834ff6541e5-kube-api-access-qrqw4\") pod \"glance-db-sync-5qxkw\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") " pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.148144 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-66t7q" event={"ID":"0e57c646-4b20-4bb9-9c89-bad52b7a1c07","Type":"ContainerDied","Data":"10c3cadf99a13fd24ef73c67b0e8c9b020165fda6c1a37f5fd46e3f256c97d14"} Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.148193 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10c3cadf99a13fd24ef73c67b0e8c9b020165fda6c1a37f5fd46e3f256c97d14" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.149132 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"dde2eb3646c55654ebb297726552f62ea2b1fa7ef5096af80689a1b0e9ed1a02"} Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.149413 
4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-66t7q" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.175294 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5qxkw" Dec 01 08:56:17 crc kubenswrapper[4689]: I1201 08:56:17.881561 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5qxkw"] Dec 01 08:56:18 crc kubenswrapper[4689]: I1201 08:56:18.158743 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5qxkw" event={"ID":"fbecbbce-632b-4832-b4aa-6834ff6541e5","Type":"ContainerStarted","Data":"01c03df410ccc9d22c394ab5e5ba4fd3efd443996d7ab61f0c76a775d9670eb1"} Dec 01 08:56:19 crc kubenswrapper[4689]: I1201 08:56:19.178683 4689 generic.go:334] "Generic (PLEG): container finished" podID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" containerID="6c94d18bc981b2dbe3f34fea2eb76e6d8e0b233b1a1d374cb8c1ed08c12aed49" exitCode=0 Dec 01 08:56:19 crc kubenswrapper[4689]: I1201 08:56:19.178778 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87","Type":"ContainerDied","Data":"6c94d18bc981b2dbe3f34fea2eb76e6d8e0b233b1a1d374cb8c1ed08c12aed49"} Dec 01 08:56:19 crc kubenswrapper[4689]: I1201 08:56:19.196354 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"146b290eb04c7c6324bf8e017e3a8e339fad3560d692ea798dffa436081011db"} Dec 01 08:56:19 crc kubenswrapper[4689]: I1201 08:56:19.197661 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"bbe04e207731d282ba5342fdffa90c6674e2973e849bb728e722d27ed58cc3a3"} Dec 01 08:56:19 crc kubenswrapper[4689]: I1201 08:56:19.197683 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"61e9bdcc5119313a120f8a63d2e70c656fe29cfd492e346f8a76de6005b5843f"} Dec 01 08:56:19 crc kubenswrapper[4689]: I1201 08:56:19.197695 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"c7e87c4a4f65468f0b910724ab2875353ef01f39098e3d1367e77e4aa09fc2ce"} Dec 01 08:56:19 crc kubenswrapper[4689]: I1201 08:56:19.687922 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-48955" Dec 01 08:56:20 crc kubenswrapper[4689]: I1201 08:56:20.224222 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87","Type":"ContainerStarted","Data":"328e6e999625f74b66c204928991821928098f5fd4af8cc7282a217fdf897259"} Dec 01 08:56:20 crc kubenswrapper[4689]: I1201 08:56:20.226298 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:56:20 crc kubenswrapper[4689]: I1201 08:56:20.246258 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371947.608541 podStartE2EDuration="1m29.246233994s" podCreationTimestamp="2025-12-01 08:54:51 +0000 UTC" firstStartedPulling="2025-12-01 08:54:53.744146429 +0000 
UTC m=+973.816434333" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:20.243770467 +0000 UTC m=+1060.316058371" watchObservedRunningTime="2025-12-01 08:56:20.246233994 +0000 UTC m=+1060.318521898" Dec 01 08:56:21 crc kubenswrapper[4689]: I1201 08:56:21.256776 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"b730e7e1a8b9bc91473f0de82c948692a415a1a1e8bd02477ea54179ba3b2e47"} Dec 01 08:56:21 crc kubenswrapper[4689]: I1201 08:56:21.259133 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"f2bf524b92665febebe21118eac43a0a8aaf76a6d19e294bdf60734dfb21d6b4"} Dec 01 08:56:21 crc kubenswrapper[4689]: I1201 08:56:21.259165 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"cf180d9e100089b69d822af8d931d1c4d875d16ca5dc5d606bb83117d6bc163c"} Dec 01 08:56:22 crc kubenswrapper[4689]: I1201 08:56:22.290423 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"068d063668b26219a2abd1283e72950f6c566f7a7fdf4ff0851f87aac12239b1"} Dec 01 08:56:24 crc kubenswrapper[4689]: I1201 08:56:24.328438 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"3543072407c3c996031ff4366392338992824cbbe0ac2c1c594bb980a1a684ff"} Dec 01 08:56:24 crc kubenswrapper[4689]: I1201 08:56:24.328771 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"4cd06956be639b4ac0269dd130c350bfec00106b99f083983942b2a93c367425"} Dec 01 08:56:24 crc kubenswrapper[4689]: I1201 08:56:24.328782 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"7606be32d60c31f0a6f865e5fcb61a54a66e7492399adb06c110e562a2bc591b"} Dec 01 08:56:24 crc kubenswrapper[4689]: I1201 08:56:24.328790 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"0e04168b0f767ddeb861ddc1c7161bdcfb546fcccdc08eebf21ccae936d320ee"} Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.344609 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"cd44b8e57c569df16130327e0520999feebd1e70bd512245da6a8a528946c2ab"} Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.344922 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"1f269911344f02e11630f6da04d63e68d4f0bec07b1e50e0e50158ab27c40b02"} Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.344939 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c18c4a63-48ba-42e2-a7f0-d5750963b90f","Type":"ContainerStarted","Data":"f7284f5f02d8e4b48cbcbfc188e0c607d367feb3e57292733d9a721773556f38"} Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.390592 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.002683935 podStartE2EDuration="27.390564313s" podCreationTimestamp="2025-12-01 08:55:58 +0000 UTC" firstStartedPulling="2025-12-01 08:56:16.812285078 +0000 UTC m=+1056.884572982" lastFinishedPulling="2025-12-01 08:56:23.200165456 +0000 UTC m=+1063.272453360" observedRunningTime="2025-12-01 08:56:25.377210217 +0000 UTC m=+1065.449498131" watchObservedRunningTime="2025-12-01 08:56:25.390564313 +0000 UTC m=+1065.462852217" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.680541 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bv9wg"] Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.682127 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.684182 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.701443 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bv9wg"] Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.759637 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.759734 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.759777 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.759832 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-config\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.759848 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnd66\" (UniqueName: \"kubernetes.io/projected/f3fc4aaf-1747-4ced-877d-63533218e8f1-kube-api-access-pnd66\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.759882 
4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.860902 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-config\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.860940 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnd66\" (UniqueName: \"kubernetes.io/projected/f3fc4aaf-1747-4ced-877d-63533218e8f1-kube-api-access-pnd66\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.861397 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.861585 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.862047 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.862216 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.862443 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.863123 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.863317 4689 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-config\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.863778 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.864587 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:25 crc kubenswrapper[4689]: I1201 08:56:25.879926 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnd66\" (UniqueName: \"kubernetes.io/projected/f3fc4aaf-1747-4ced-877d-63533218e8f1-kube-api-access-pnd66\") pod \"dnsmasq-dns-77585f5f8c-bv9wg\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:26 crc kubenswrapper[4689]: I1201 08:56:26.002939 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:32 crc kubenswrapper[4689]: I1201 08:56:32.905603 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.141957 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bv9wg"] Dec 01 08:56:33 crc kubenswrapper[4689]: W1201 08:56:33.161270 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3fc4aaf_1747_4ced_877d_63533218e8f1.slice/crio-6667675247a46d058418cafabd7b71cf70099ee91dc0eb6c0f41332112e3ecc9 WatchSource:0}: Error finding container 6667675247a46d058418cafabd7b71cf70099ee91dc0eb6c0f41332112e3ecc9: Status 404 returned error can't find the container with id 6667675247a46d058418cafabd7b71cf70099ee91dc0eb6c0f41332112e3ecc9 Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.246451 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.263799 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2cjdr"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.267111 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2cjdr" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.307959 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2cjdr"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.412282 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45efc88a-3b6d-41f2-91fa-8025cfed0b11-operator-scripts\") pod \"cinder-db-create-2cjdr\" (UID: \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\") " pod="openstack/cinder-db-create-2cjdr" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.412567 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffbr7\" (UniqueName: \"kubernetes.io/projected/45efc88a-3b6d-41f2-91fa-8025cfed0b11-kube-api-access-ffbr7\") pod \"cinder-db-create-2cjdr\" (UID: \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\") " pod="openstack/cinder-db-create-2cjdr" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.446188 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" event={"ID":"f3fc4aaf-1747-4ced-877d-63533218e8f1","Type":"ContainerStarted","Data":"6667675247a46d058418cafabd7b71cf70099ee91dc0eb6c0f41332112e3ecc9"} Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.452683 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e1d0-account-create-update-922g5"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.453708 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.460682 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.469585 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e1d0-account-create-update-922g5"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.514133 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45efc88a-3b6d-41f2-91fa-8025cfed0b11-operator-scripts\") pod \"cinder-db-create-2cjdr\" (UID: \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\") " pod="openstack/cinder-db-create-2cjdr" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.514216 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hql7j\" (UniqueName: \"kubernetes.io/projected/2761a9b0-bfa4-4992-83d9-532157d688c4-kube-api-access-hql7j\") pod \"cinder-e1d0-account-create-update-922g5\" (UID: \"2761a9b0-bfa4-4992-83d9-532157d688c4\") " pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.514320 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2761a9b0-bfa4-4992-83d9-532157d688c4-operator-scripts\") pod \"cinder-e1d0-account-create-update-922g5\" (UID: \"2761a9b0-bfa4-4992-83d9-532157d688c4\") " pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.514352 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffbr7\" (UniqueName: 
\"kubernetes.io/projected/45efc88a-3b6d-41f2-91fa-8025cfed0b11-kube-api-access-ffbr7\") pod \"cinder-db-create-2cjdr\" (UID: \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\") " pod="openstack/cinder-db-create-2cjdr" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.515567 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45efc88a-3b6d-41f2-91fa-8025cfed0b11-operator-scripts\") pod \"cinder-db-create-2cjdr\" (UID: \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\") " pod="openstack/cinder-db-create-2cjdr" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.551387 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-w42fl"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.552756 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.578505 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w42fl"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.581771 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffbr7\" (UniqueName: \"kubernetes.io/projected/45efc88a-3b6d-41f2-91fa-8025cfed0b11-kube-api-access-ffbr7\") pod \"cinder-db-create-2cjdr\" (UID: \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\") " pod="openstack/cinder-db-create-2cjdr" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.601806 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-22b7-account-create-update-l4xst"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.603873 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-22b7-account-create-update-l4xst" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.616160 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2cjdr" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.617273 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2761a9b0-bfa4-4992-83d9-532157d688c4-operator-scripts\") pod \"cinder-e1d0-account-create-update-922g5\" (UID: \"2761a9b0-bfa4-4992-83d9-532157d688c4\") " pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.622587 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.623314 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2761a9b0-bfa4-4992-83d9-532157d688c4-operator-scripts\") pod \"cinder-e1d0-account-create-update-922g5\" (UID: \"2761a9b0-bfa4-4992-83d9-532157d688c4\") " pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.617328 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljx4t\" (UniqueName: \"kubernetes.io/projected/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-kube-api-access-ljx4t\") pod \"barbican-db-create-w42fl\" (UID: \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\") " pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.631532 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-operator-scripts\") pod \"barbican-db-create-w42fl\" (UID: \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\") " pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.631659 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hql7j\" (UniqueName: \"kubernetes.io/projected/2761a9b0-bfa4-4992-83d9-532157d688c4-kube-api-access-hql7j\") pod \"cinder-e1d0-account-create-update-922g5\" (UID: \"2761a9b0-bfa4-4992-83d9-532157d688c4\") " pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.651976 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-22b7-account-create-update-l4xst"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.695814 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hql7j\" (UniqueName: \"kubernetes.io/projected/2761a9b0-bfa4-4992-83d9-532157d688c4-kube-api-access-hql7j\") pod \"cinder-e1d0-account-create-update-922g5\" (UID: \"2761a9b0-bfa4-4992-83d9-532157d688c4\") " pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.746910 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlmfg\" (UniqueName: \"kubernetes.io/projected/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-kube-api-access-qlmfg\") pod \"barbican-22b7-account-create-update-l4xst\" (UID: \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\") " pod="openstack/barbican-22b7-account-create-update-l4xst" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.755060 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-operator-scripts\") pod \"barbican-22b7-account-create-update-l4xst\" (UID: \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\") " pod="openstack/barbican-22b7-account-create-update-l4xst" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.755344 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljx4t\" (UniqueName: \"kubernetes.io/projected/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-kube-api-access-ljx4t\") pod \"barbican-db-create-w42fl\" (UID: \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\") " pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.770826 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-operator-scripts\") pod \"barbican-db-create-w42fl\" (UID: \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\") " pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.771682 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-operator-scripts\") pod \"barbican-db-create-w42fl\" (UID: \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\") " pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.772008 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wkgqk"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.777915 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.798460 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.812528 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.814482 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jq84v"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.825900 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.828857 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.852849 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljx4t\" (UniqueName: \"kubernetes.io/projected/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-kube-api-access-ljx4t\") pod \"barbican-db-create-w42fl\" (UID: \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\") " pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.872037 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.872629 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-44r55" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.873017 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlmfg\" (UniqueName: \"kubernetes.io/projected/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-kube-api-access-qlmfg\") pod \"barbican-22b7-account-create-update-l4xst\" (UID: \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\") " pod="openstack/barbican-22b7-account-create-update-l4xst" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.873064 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-operator-scripts\") pod \"barbican-22b7-account-create-update-l4xst\" (UID: \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\") " pod="openstack/barbican-22b7-account-create-update-l4xst" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.873875 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-operator-scripts\") pod \"barbican-22b7-account-create-update-l4xst\" (UID: \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\") " pod="openstack/barbican-22b7-account-create-update-l4xst" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.893802 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.899767 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlmfg\" (UniqueName: \"kubernetes.io/projected/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-kube-api-access-qlmfg\") pod \"barbican-22b7-account-create-update-l4xst\" (UID: \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\") " pod="openstack/barbican-22b7-account-create-update-l4xst" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.920501 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wkgqk"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.950460 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jq84v"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.974357 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a21dc6-917e-454e-a0ef-c4f21af302b3-operator-scripts\") pod \"neutron-db-create-jq84v\" (UID: \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\") " pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.974529 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljvz\" (UniqueName: \"kubernetes.io/projected/21498a51-fbab-4263-88dd-9c30df75721c-kube-api-access-wljvz\") pod \"keystone-db-sync-wkgqk\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") " pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.974606 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-combined-ca-bundle\") pod \"keystone-db-sync-wkgqk\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") " pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.974645 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9bbz\" (UniqueName: \"kubernetes.io/projected/f5a21dc6-917e-454e-a0ef-c4f21af302b3-kube-api-access-v9bbz\") pod \"neutron-db-create-jq84v\" (UID: \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\") " pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.974701 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-config-data\") pod \"keystone-db-sync-wkgqk\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") " pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.988454 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a825-account-create-update-7f62d"] Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.989711 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a825-account-create-update-7f62d" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.993260 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 01 08:56:33 crc kubenswrapper[4689]: I1201 08:56:33.995138 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a825-account-create-update-7f62d"] Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.075963 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9bbz\" (UniqueName: \"kubernetes.io/projected/f5a21dc6-917e-454e-a0ef-c4f21af302b3-kube-api-access-v9bbz\") pod \"neutron-db-create-jq84v\" (UID: \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\") " pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.076605 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-config-data\") pod \"keystone-db-sync-wkgqk\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") " pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.076761 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08afe837-f3ba-42bb-b61b-492d30229c45-operator-scripts\") pod \"neutron-a825-account-create-update-7f62d\" (UID: \"08afe837-f3ba-42bb-b61b-492d30229c45\") " pod="openstack/neutron-a825-account-create-update-7f62d" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.076869 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a21dc6-917e-454e-a0ef-c4f21af302b3-operator-scripts\") pod \"neutron-db-create-jq84v\" (UID: \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\") " pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.076959 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljvz\" (UniqueName: \"kubernetes.io/projected/21498a51-fbab-4263-88dd-9c30df75721c-kube-api-access-wljvz\") pod \"keystone-db-sync-wkgqk\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") " pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.077064 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzwt\" (UniqueName: \"kubernetes.io/projected/08afe837-f3ba-42bb-b61b-492d30229c45-kube-api-access-2jzwt\") pod \"neutron-a825-account-create-update-7f62d\" (UID: \"08afe837-f3ba-42bb-b61b-492d30229c45\") " pod="openstack/neutron-a825-account-create-update-7f62d" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.077938 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-combined-ca-bundle\") pod \"keystone-db-sync-wkgqk\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") " pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.087039 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a21dc6-917e-454e-a0ef-c4f21af302b3-operator-scripts\") pod \"neutron-db-create-jq84v\" (UID: \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\") " 
pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.097276 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-combined-ca-bundle\") pod \"keystone-db-sync-wkgqk\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") " pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.163831 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-22b7-account-create-update-l4xst" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.168092 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljvz\" (UniqueName: \"kubernetes.io/projected/21498a51-fbab-4263-88dd-9c30df75721c-kube-api-access-wljvz\") pod \"keystone-db-sync-wkgqk\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") " pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.172291 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-config-data\") pod \"keystone-db-sync-wkgqk\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") " pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.178944 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9bbz\" (UniqueName: \"kubernetes.io/projected/f5a21dc6-917e-454e-a0ef-c4f21af302b3-kube-api-access-v9bbz\") pod \"neutron-db-create-jq84v\" (UID: \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\") " pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.182117 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wkgqk" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.187911 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jzwt\" (UniqueName: \"kubernetes.io/projected/08afe837-f3ba-42bb-b61b-492d30229c45-kube-api-access-2jzwt\") pod \"neutron-a825-account-create-update-7f62d\" (UID: \"08afe837-f3ba-42bb-b61b-492d30229c45\") " pod="openstack/neutron-a825-account-create-update-7f62d" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.188269 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08afe837-f3ba-42bb-b61b-492d30229c45-operator-scripts\") pod \"neutron-a825-account-create-update-7f62d\" (UID: \"08afe837-f3ba-42bb-b61b-492d30229c45\") " pod="openstack/neutron-a825-account-create-update-7f62d" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.189104 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08afe837-f3ba-42bb-b61b-492d30229c45-operator-scripts\") pod \"neutron-a825-account-create-update-7f62d\" (UID: \"08afe837-f3ba-42bb-b61b-492d30229c45\") " pod="openstack/neutron-a825-account-create-update-7f62d" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.218138 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.228226 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jzwt\" (UniqueName: \"kubernetes.io/projected/08afe837-f3ba-42bb-b61b-492d30229c45-kube-api-access-2jzwt\") pod \"neutron-a825-account-create-update-7f62d\" (UID: \"08afe837-f3ba-42bb-b61b-492d30229c45\") " pod="openstack/neutron-a825-account-create-update-7f62d" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.294622 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a825-account-create-update-7f62d" Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.501445 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5qxkw" event={"ID":"fbecbbce-632b-4832-b4aa-6834ff6541e5","Type":"ContainerStarted","Data":"fab6e207a5bb22be56610cfc7b26e34ca92f987886681d3a174260301254d349"} Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.538495 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" event={"ID":"f3fc4aaf-1747-4ced-877d-63533218e8f1","Type":"ContainerStarted","Data":"b9350be88f0bc28216b5fdffcd4433e24639a21ad8f7800737108559ad9fb387"} Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.575841 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2cjdr"] Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.770122 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e1d0-account-create-update-922g5"] Dec 01 08:56:34 crc kubenswrapper[4689]: W1201 08:56:34.794682 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2761a9b0_bfa4_4992_83d9_532157d688c4.slice/crio-220e404fba8b8e96885339107fb4d87f4f7111fab4cc990651f53700c6aa0235 WatchSource:0}: Error finding container 220e404fba8b8e96885339107fb4d87f4f7111fab4cc990651f53700c6aa0235: Status 404 returned error can't find the container with id 220e404fba8b8e96885339107fb4d87f4f7111fab4cc990651f53700c6aa0235 Dec 01 08:56:34 crc kubenswrapper[4689]: I1201 08:56:34.828628 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-w42fl"] Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.041175 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wkgqk"] Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.073480 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-22b7-account-create-update-l4xst"] Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.102453 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jq84v"] Dec 01 08:56:35 crc kubenswrapper[4689]: W1201 08:56:35.112689 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5a21dc6_917e_454e_a0ef_c4f21af302b3.slice/crio-82c821697a45b956fae7c7021f0e5a523d6a3233e9f4ea625160784dccd8c68a WatchSource:0}: Error finding container 82c821697a45b956fae7c7021f0e5a523d6a3233e9f4ea625160784dccd8c68a: Status 404 returned error can't find the container with id 82c821697a45b956fae7c7021f0e5a523d6a3233e9f4ea625160784dccd8c68a Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.234723 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a825-account-create-update-7f62d"] Dec 01 
08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.547341 4689 generic.go:334] "Generic (PLEG): container finished" podID="45efc88a-3b6d-41f2-91fa-8025cfed0b11" containerID="74a62c071b1cebedb55d068a8db740442722bb5c88139c8bd4ca26e1a950919a" exitCode=0 Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.547407 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2cjdr" event={"ID":"45efc88a-3b6d-41f2-91fa-8025cfed0b11","Type":"ContainerDied","Data":"74a62c071b1cebedb55d068a8db740442722bb5c88139c8bd4ca26e1a950919a"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.547693 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2cjdr" event={"ID":"45efc88a-3b6d-41f2-91fa-8025cfed0b11","Type":"ContainerStarted","Data":"28e2412e6bbc44ce3cff75aeb04099f612c0a1ec14105f22b913b9de92e051f8"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.549109 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e1d0-account-create-update-922g5" event={"ID":"2761a9b0-bfa4-4992-83d9-532157d688c4","Type":"ContainerStarted","Data":"e063038774bf2c85c2f8a10cfe598939d148c6cab2028e51b7339b821406a104"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.549143 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e1d0-account-create-update-922g5" event={"ID":"2761a9b0-bfa4-4992-83d9-532157d688c4","Type":"ContainerStarted","Data":"220e404fba8b8e96885339107fb4d87f4f7111fab4cc990651f53700c6aa0235"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.550171 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wkgqk" event={"ID":"21498a51-fbab-4263-88dd-9c30df75721c","Type":"ContainerStarted","Data":"793f7f96e6bcbc12ffa09f218a8c5ce03c734662295a7172059da523faae15f6"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.551342 4689 generic.go:334] "Generic (PLEG): container finished" podID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerID="b9350be88f0bc28216b5fdffcd4433e24639a21ad8f7800737108559ad9fb387" exitCode=0 Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.551396 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" event={"ID":"f3fc4aaf-1747-4ced-877d-63533218e8f1","Type":"ContainerDied","Data":"b9350be88f0bc28216b5fdffcd4433e24639a21ad8f7800737108559ad9fb387"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.553061 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-22b7-account-create-update-l4xst" event={"ID":"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6","Type":"ContainerStarted","Data":"b420e3d8808e9c474aec7172df259e1b5e282072be07e9360abd024e3d5d8aee"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.553122 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-22b7-account-create-update-l4xst" event={"ID":"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6","Type":"ContainerStarted","Data":"ed9ee67081863b059552555d52a097c44791d5b1e0bbc9de47dbdaa2b1a9bd30"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.554149 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a825-account-create-update-7f62d" event={"ID":"08afe837-f3ba-42bb-b61b-492d30229c45","Type":"ContainerStarted","Data":"df5bd295cdd0f16cc45e0f52af00dd0786f840f2a08d17dc2918764d7cf5433e"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.554181 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a825-account-create-update-7f62d" 
event={"ID":"08afe837-f3ba-42bb-b61b-492d30229c45","Type":"ContainerStarted","Data":"ebfae0592cfa7b4840671c206bd3676e563f538cc6bc0f93d62ea691afe840af"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.556476 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w42fl" event={"ID":"9fbffd63-b0cb-41e8-a4a7-d995432ad88c","Type":"ContainerStarted","Data":"65f43bcc97ef562b23e0c17d852a26bb7fa26fd5811acde8ece17ce5fcf65515"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.556513 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w42fl" event={"ID":"9fbffd63-b0cb-41e8-a4a7-d995432ad88c","Type":"ContainerStarted","Data":"dff3c175d5fbceffba35d7a3d2ef8d0c6c3094e00415651b466b2a7d71cc4a2e"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.565503 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jq84v" event={"ID":"f5a21dc6-917e-454e-a0ef-c4f21af302b3","Type":"ContainerStarted","Data":"d853403846e2f530d1ae5b8dd46cfb48c5bec299612c373667b88ddb6cc00291"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.565563 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jq84v" event={"ID":"f5a21dc6-917e-454e-a0ef-c4f21af302b3","Type":"ContainerStarted","Data":"82c821697a45b956fae7c7021f0e5a523d6a3233e9f4ea625160784dccd8c68a"} Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.644190 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-22b7-account-create-update-l4xst" podStartSLOduration=2.644151946 podStartE2EDuration="2.644151946s" podCreationTimestamp="2025-12-01 08:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:35.639634762 +0000 UTC m=+1075.711922666" watchObservedRunningTime="2025-12-01 08:56:35.644151946 +0000 UTC m=+1075.716439850" Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.672344 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e1d0-account-create-update-922g5" podStartSLOduration=2.672323888 podStartE2EDuration="2.672323888s" podCreationTimestamp="2025-12-01 08:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:35.66473003 +0000 UTC m=+1075.737017934" watchObservedRunningTime="2025-12-01 08:56:35.672323888 +0000 UTC m=+1075.744611792" Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.687064 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-w42fl" podStartSLOduration=2.687038371 podStartE2EDuration="2.687038371s" podCreationTimestamp="2025-12-01 08:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:35.679107794 +0000 UTC m=+1075.751395698" watchObservedRunningTime="2025-12-01 08:56:35.687038371 +0000 UTC m=+1075.759326275" Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.720207 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5qxkw" podStartSLOduration=5.131947697 podStartE2EDuration="19.72018971s" podCreationTimestamp="2025-12-01 08:56:16 +0000 UTC" firstStartedPulling="2025-12-01 08:56:18.098145287 +0000 UTC m=+1058.170433191" lastFinishedPulling="2025-12-01 08:56:32.6863873 +0000 UTC 
m=+1072.758675204" observedRunningTime="2025-12-01 08:56:35.717566158 +0000 UTC m=+1075.789854062" watchObservedRunningTime="2025-12-01 08:56:35.72018971 +0000 UTC m=+1075.792477614" Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.784766 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-jq84v" podStartSLOduration=2.784747279 podStartE2EDuration="2.784747279s" podCreationTimestamp="2025-12-01 08:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:35.766491208 +0000 UTC m=+1075.838779112" watchObservedRunningTime="2025-12-01 08:56:35.784747279 +0000 UTC m=+1075.857035183" Dec 01 08:56:35 crc kubenswrapper[4689]: I1201 08:56:35.791532 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-a825-account-create-update-7f62d" podStartSLOduration=2.7915103439999998 podStartE2EDuration="2.791510344s" podCreationTimestamp="2025-12-01 08:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:35.782758384 +0000 UTC m=+1075.855046298" watchObservedRunningTime="2025-12-01 08:56:35.791510344 +0000 UTC m=+1075.863798238" Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.573556 4689 generic.go:334] "Generic (PLEG): container finished" podID="f5a21dc6-917e-454e-a0ef-c4f21af302b3" containerID="d853403846e2f530d1ae5b8dd46cfb48c5bec299612c373667b88ddb6cc00291" exitCode=0 Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.573644 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jq84v" event={"ID":"f5a21dc6-917e-454e-a0ef-c4f21af302b3","Type":"ContainerDied","Data":"d853403846e2f530d1ae5b8dd46cfb48c5bec299612c373667b88ddb6cc00291"} Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.576685 4689 generic.go:334] "Generic (PLEG): container finished" podID="2761a9b0-bfa4-4992-83d9-532157d688c4" containerID="e063038774bf2c85c2f8a10cfe598939d148c6cab2028e51b7339b821406a104" exitCode=0 Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.576783 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e1d0-account-create-update-922g5" event={"ID":"2761a9b0-bfa4-4992-83d9-532157d688c4","Type":"ContainerDied","Data":"e063038774bf2c85c2f8a10cfe598939d148c6cab2028e51b7339b821406a104"} Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.580036 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" event={"ID":"f3fc4aaf-1747-4ced-877d-63533218e8f1","Type":"ContainerStarted","Data":"e64ddeef17e1dc2b79ca91d031e2897d8d3287a13dd93cae0a1f4c6c71e4f2e9"} Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.581242 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.584315 4689 generic.go:334] "Generic (PLEG): container finished" podID="75b770c2-67d6-4b04-9a4d-a3a1cad52cc6" containerID="b420e3d8808e9c474aec7172df259e1b5e282072be07e9360abd024e3d5d8aee" exitCode=0 Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.584431 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-22b7-account-create-update-l4xst" event={"ID":"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6","Type":"ContainerDied","Data":"b420e3d8808e9c474aec7172df259e1b5e282072be07e9360abd024e3d5d8aee"} 
Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.587773 4689 generic.go:334] "Generic (PLEG): container finished" podID="08afe837-f3ba-42bb-b61b-492d30229c45" containerID="df5bd295cdd0f16cc45e0f52af00dd0786f840f2a08d17dc2918764d7cf5433e" exitCode=0
Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.587821 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a825-account-create-update-7f62d" event={"ID":"08afe837-f3ba-42bb-b61b-492d30229c45","Type":"ContainerDied","Data":"df5bd295cdd0f16cc45e0f52af00dd0786f840f2a08d17dc2918764d7cf5433e"}
Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.591539 4689 generic.go:334] "Generic (PLEG): container finished" podID="9fbffd63-b0cb-41e8-a4a7-d995432ad88c" containerID="65f43bcc97ef562b23e0c17d852a26bb7fa26fd5811acde8ece17ce5fcf65515" exitCode=0
Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.591981 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w42fl" event={"ID":"9fbffd63-b0cb-41e8-a4a7-d995432ad88c","Type":"ContainerDied","Data":"65f43bcc97ef562b23e0c17d852a26bb7fa26fd5811acde8ece17ce5fcf65515"}
Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.635456 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" podStartSLOduration=11.635432222 podStartE2EDuration="11.635432222s" podCreationTimestamp="2025-12-01 08:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:36.632376148 +0000 UTC m=+1076.704664062" watchObservedRunningTime="2025-12-01 08:56:36.635432222 +0000 UTC m=+1076.707720126"
Dec 01 08:56:36 crc kubenswrapper[4689]: I1201 08:56:36.961663 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2cjdr"
Dec 01 08:56:37 crc kubenswrapper[4689]: I1201 08:56:37.042235 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45efc88a-3b6d-41f2-91fa-8025cfed0b11-operator-scripts\") pod \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\" (UID: \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\") "
Dec 01 08:56:37 crc kubenswrapper[4689]: I1201 08:56:37.042457 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffbr7\" (UniqueName: \"kubernetes.io/projected/45efc88a-3b6d-41f2-91fa-8025cfed0b11-kube-api-access-ffbr7\") pod \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\" (UID: \"45efc88a-3b6d-41f2-91fa-8025cfed0b11\") "
Dec 01 08:56:37 crc kubenswrapper[4689]: I1201 08:56:37.042811 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45efc88a-3b6d-41f2-91fa-8025cfed0b11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45efc88a-3b6d-41f2-91fa-8025cfed0b11" (UID: "45efc88a-3b6d-41f2-91fa-8025cfed0b11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:56:37 crc kubenswrapper[4689]: I1201 08:56:37.056779 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45efc88a-3b6d-41f2-91fa-8025cfed0b11-kube-api-access-ffbr7" (OuterVolumeSpecName: "kube-api-access-ffbr7") pod "45efc88a-3b6d-41f2-91fa-8025cfed0b11" (UID: "45efc88a-3b6d-41f2-91fa-8025cfed0b11"). InnerVolumeSpecName "kube-api-access-ffbr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:56:37 crc kubenswrapper[4689]: I1201 08:56:37.146020 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffbr7\" (UniqueName: \"kubernetes.io/projected/45efc88a-3b6d-41f2-91fa-8025cfed0b11-kube-api-access-ffbr7\") on node \"crc\" DevicePath \"\""
Dec 01 08:56:37 crc kubenswrapper[4689]: I1201 08:56:37.146062 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45efc88a-3b6d-41f2-91fa-8025cfed0b11-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 08:56:37 crc kubenswrapper[4689]: I1201 08:56:37.614631 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2cjdr"
Dec 01 08:56:37 crc kubenswrapper[4689]: I1201 08:56:37.614689 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2cjdr" event={"ID":"45efc88a-3b6d-41f2-91fa-8025cfed0b11","Type":"ContainerDied","Data":"28e2412e6bbc44ce3cff75aeb04099f612c0a1ec14105f22b913b9de92e051f8"}
Dec 01 08:56:37 crc kubenswrapper[4689]: I1201 08:56:37.614734 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e2412e6bbc44ce3cff75aeb04099f612c0a1ec14105f22b913b9de92e051f8"
Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.161186 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-22b7-account-create-update-l4xst"
Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.164095 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a825-account-create-update-7f62d"
Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.270395 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlmfg\" (UniqueName: \"kubernetes.io/projected/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-kube-api-access-qlmfg\") pod \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\" (UID: \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\") "
Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.270518 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jzwt\" (UniqueName: \"kubernetes.io/projected/08afe837-f3ba-42bb-b61b-492d30229c45-kube-api-access-2jzwt\") pod \"08afe837-f3ba-42bb-b61b-492d30229c45\" (UID: \"08afe837-f3ba-42bb-b61b-492d30229c45\") "
Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.270560 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-operator-scripts\") pod \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\" (UID: \"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6\") "
Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.270611 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08afe837-f3ba-42bb-b61b-492d30229c45-operator-scripts\") pod \"08afe837-f3ba-42bb-b61b-492d30229c45\" (UID: \"08afe837-f3ba-42bb-b61b-492d30229c45\") "
Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.271887 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75b770c2-67d6-4b04-9a4d-a3a1cad52cc6" (UID: "75b770c2-67d6-4b04-9a4d-a3a1cad52cc6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.271954 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08afe837-f3ba-42bb-b61b-492d30229c45-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08afe837-f3ba-42bb-b61b-492d30229c45" (UID: "08afe837-f3ba-42bb-b61b-492d30229c45"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.283082 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08afe837-f3ba-42bb-b61b-492d30229c45-kube-api-access-2jzwt" (OuterVolumeSpecName: "kube-api-access-2jzwt") pod "08afe837-f3ba-42bb-b61b-492d30229c45" (UID: "08afe837-f3ba-42bb-b61b-492d30229c45"). InnerVolumeSpecName "kube-api-access-2jzwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.322815 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-kube-api-access-qlmfg" (OuterVolumeSpecName: "kube-api-access-qlmfg") pod "75b770c2-67d6-4b04-9a4d-a3a1cad52cc6" (UID: "75b770c2-67d6-4b04-9a4d-a3a1cad52cc6"). InnerVolumeSpecName "kube-api-access-qlmfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.355327 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.372771 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.378915 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a21dc6-917e-454e-a0ef-c4f21af302b3-operator-scripts\") pod \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\" (UID: \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\") " Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.379190 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljx4t\" (UniqueName: \"kubernetes.io/projected/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-kube-api-access-ljx4t\") pod \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\" (UID: \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\") " Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.379329 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9bbz\" (UniqueName: \"kubernetes.io/projected/f5a21dc6-917e-454e-a0ef-c4f21af302b3-kube-api-access-v9bbz\") pod \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\" (UID: \"f5a21dc6-917e-454e-a0ef-c4f21af302b3\") " Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.381410 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5a21dc6-917e-454e-a0ef-c4f21af302b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5a21dc6-917e-454e-a0ef-c4f21af302b3" (UID: "f5a21dc6-917e-454e-a0ef-c4f21af302b3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.381796 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlmfg\" (UniqueName: \"kubernetes.io/projected/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-kube-api-access-qlmfg\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.381899 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jzwt\" (UniqueName: \"kubernetes.io/projected/08afe837-f3ba-42bb-b61b-492d30229c45-kube-api-access-2jzwt\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.382011 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.382098 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08afe837-f3ba-42bb-b61b-492d30229c45-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.390710 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a21dc6-917e-454e-a0ef-c4f21af302b3-kube-api-access-v9bbz" (OuterVolumeSpecName: "kube-api-access-v9bbz") pod "f5a21dc6-917e-454e-a0ef-c4f21af302b3" (UID: "f5a21dc6-917e-454e-a0ef-c4f21af302b3"). InnerVolumeSpecName "kube-api-access-v9bbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.390815 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-kube-api-access-ljx4t" (OuterVolumeSpecName: "kube-api-access-ljx4t") pod "9fbffd63-b0cb-41e8-a4a7-d995432ad88c" (UID: "9fbffd63-b0cb-41e8-a4a7-d995432ad88c"). InnerVolumeSpecName "kube-api-access-ljx4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.392610 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.483148 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hql7j\" (UniqueName: \"kubernetes.io/projected/2761a9b0-bfa4-4992-83d9-532157d688c4-kube-api-access-hql7j\") pod \"2761a9b0-bfa4-4992-83d9-532157d688c4\" (UID: \"2761a9b0-bfa4-4992-83d9-532157d688c4\") " Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.483740 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2761a9b0-bfa4-4992-83d9-532157d688c4-operator-scripts\") pod \"2761a9b0-bfa4-4992-83d9-532157d688c4\" (UID: \"2761a9b0-bfa4-4992-83d9-532157d688c4\") " Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.483942 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-operator-scripts\") pod \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\" (UID: \"9fbffd63-b0cb-41e8-a4a7-d995432ad88c\") " Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.484421 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5a21dc6-917e-454e-a0ef-c4f21af302b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.484508 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljx4t\" (UniqueName: \"kubernetes.io/projected/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-kube-api-access-ljx4t\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.484572 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9bbz\" (UniqueName: \"kubernetes.io/projected/f5a21dc6-917e-454e-a0ef-c4f21af302b3-kube-api-access-v9bbz\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.484469 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2761a9b0-bfa4-4992-83d9-532157d688c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2761a9b0-bfa4-4992-83d9-532157d688c4" (UID: "2761a9b0-bfa4-4992-83d9-532157d688c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.484792 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fbffd63-b0cb-41e8-a4a7-d995432ad88c" (UID: "9fbffd63-b0cb-41e8-a4a7-d995432ad88c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.487769 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2761a9b0-bfa4-4992-83d9-532157d688c4-kube-api-access-hql7j" (OuterVolumeSpecName: "kube-api-access-hql7j") pod "2761a9b0-bfa4-4992-83d9-532157d688c4" (UID: "2761a9b0-bfa4-4992-83d9-532157d688c4"). InnerVolumeSpecName "kube-api-access-hql7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.586336 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fbffd63-b0cb-41e8-a4a7-d995432ad88c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.586378 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hql7j\" (UniqueName: \"kubernetes.io/projected/2761a9b0-bfa4-4992-83d9-532157d688c4-kube-api-access-hql7j\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.586391 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2761a9b0-bfa4-4992-83d9-532157d688c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.622774 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-w42fl" event={"ID":"9fbffd63-b0cb-41e8-a4a7-d995432ad88c","Type":"ContainerDied","Data":"dff3c175d5fbceffba35d7a3d2ef8d0c6c3094e00415651b466b2a7d71cc4a2e"} Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.622810 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff3c175d5fbceffba35d7a3d2ef8d0c6c3094e00415651b466b2a7d71cc4a2e" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.622870 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-w42fl" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.628192 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jq84v" event={"ID":"f5a21dc6-917e-454e-a0ef-c4f21af302b3","Type":"ContainerDied","Data":"82c821697a45b956fae7c7021f0e5a523d6a3233e9f4ea625160784dccd8c68a"} Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.628458 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82c821697a45b956fae7c7021f0e5a523d6a3233e9f4ea625160784dccd8c68a" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.628426 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jq84v" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.630490 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e1d0-account-create-update-922g5" event={"ID":"2761a9b0-bfa4-4992-83d9-532157d688c4","Type":"ContainerDied","Data":"220e404fba8b8e96885339107fb4d87f4f7111fab4cc990651f53700c6aa0235"} Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.630615 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220e404fba8b8e96885339107fb4d87f4f7111fab4cc990651f53700c6aa0235" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.630533 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e1d0-account-create-update-922g5" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.632182 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-22b7-account-create-update-l4xst" event={"ID":"75b770c2-67d6-4b04-9a4d-a3a1cad52cc6","Type":"ContainerDied","Data":"ed9ee67081863b059552555d52a097c44791d5b1e0bbc9de47dbdaa2b1a9bd30"} Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.632219 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed9ee67081863b059552555d52a097c44791d5b1e0bbc9de47dbdaa2b1a9bd30" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.632197 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-22b7-account-create-update-l4xst" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.634449 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a825-account-create-update-7f62d" Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.634465 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a825-account-create-update-7f62d" event={"ID":"08afe837-f3ba-42bb-b61b-492d30229c45","Type":"ContainerDied","Data":"ebfae0592cfa7b4840671c206bd3676e563f538cc6bc0f93d62ea691afe840af"} Dec 01 08:56:38 crc kubenswrapper[4689]: I1201 08:56:38.634502 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebfae0592cfa7b4840671c206bd3676e563f538cc6bc0f93d62ea691afe840af" Dec 01 08:56:39 crc kubenswrapper[4689]: I1201 08:56:39.147241 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:56:39 crc kubenswrapper[4689]: I1201 08:56:39.147468 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:56:41 crc kubenswrapper[4689]: I1201 08:56:41.004653 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:56:41 crc kubenswrapper[4689]: I1201 08:56:41.116096 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-lmql7"] Dec 01 08:56:41 crc kubenswrapper[4689]: I1201 08:56:41.116538 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-lmql7" podUID="5ad3fe76-0301-4ad7-be7a-2dea749a1c63" containerName="dnsmasq-dns" containerID="cri-o://d408b61adfcd055eab5b3a8f47f70ae36569c3b40c0eb926958f5a122e594e45" gracePeriod=10 Dec 01 08:56:41 crc kubenswrapper[4689]: I1201 08:56:41.698917 4689 generic.go:334] "Generic (PLEG): container finished" podID="5ad3fe76-0301-4ad7-be7a-2dea749a1c63" containerID="d408b61adfcd055eab5b3a8f47f70ae36569c3b40c0eb926958f5a122e594e45" exitCode=0 Dec 01 08:56:41 crc kubenswrapper[4689]: I1201 08:56:41.698963 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-lmql7" 
event={"ID":"5ad3fe76-0301-4ad7-be7a-2dea749a1c63","Type":"ContainerDied","Data":"d408b61adfcd055eab5b3a8f47f70ae36569c3b40c0eb926958f5a122e594e45"} Dec 01 08:56:43 crc kubenswrapper[4689]: I1201 08:56:43.993652 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-lmql7" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.195175 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-sb\") pod \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.195235 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-dns-svc\") pod \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.195295 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-nb\") pod \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.195413 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-config\") pod \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.195600 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zcn2\" (UniqueName: \"kubernetes.io/projected/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-kube-api-access-5zcn2\") pod \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\" (UID: \"5ad3fe76-0301-4ad7-be7a-2dea749a1c63\") " Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.203618 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-kube-api-access-5zcn2" (OuterVolumeSpecName: "kube-api-access-5zcn2") pod "5ad3fe76-0301-4ad7-be7a-2dea749a1c63" (UID: "5ad3fe76-0301-4ad7-be7a-2dea749a1c63"). InnerVolumeSpecName "kube-api-access-5zcn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.239937 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ad3fe76-0301-4ad7-be7a-2dea749a1c63" (UID: "5ad3fe76-0301-4ad7-be7a-2dea749a1c63"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.244150 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ad3fe76-0301-4ad7-be7a-2dea749a1c63" (UID: "5ad3fe76-0301-4ad7-be7a-2dea749a1c63"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.244555 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-config" (OuterVolumeSpecName: "config") pod "5ad3fe76-0301-4ad7-be7a-2dea749a1c63" (UID: "5ad3fe76-0301-4ad7-be7a-2dea749a1c63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.254814 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ad3fe76-0301-4ad7-be7a-2dea749a1c63" (UID: "5ad3fe76-0301-4ad7-be7a-2dea749a1c63"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.297122 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zcn2\" (UniqueName: \"kubernetes.io/projected/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-kube-api-access-5zcn2\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.297171 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.297185 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.297196 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.297206 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ad3fe76-0301-4ad7-be7a-2dea749a1c63-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.740273 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wkgqk" event={"ID":"21498a51-fbab-4263-88dd-9c30df75721c","Type":"ContainerStarted","Data":"83ecdc041efdabd181b74491ceb4c864f3fa2b8f0b1c3cc9ed5539e8ed1f522e"} Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.743220 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-lmql7" event={"ID":"5ad3fe76-0301-4ad7-be7a-2dea749a1c63","Type":"ContainerDied","Data":"f01dfbb24209ac4303a2dad27b14802e9d6a9b59e19c152f730e3c302ccc68d9"} Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.743300 4689 scope.go:117] "RemoveContainer" containerID="d408b61adfcd055eab5b3a8f47f70ae36569c3b40c0eb926958f5a122e594e45" Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.743404 4689 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.765770 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wkgqk" podStartSLOduration=3.262319496 podStartE2EDuration="11.765746089s" podCreationTimestamp="2025-12-01 08:56:33 +0000 UTC" firstStartedPulling="2025-12-01 08:56:35.071745879 +0000 UTC m=+1075.144033783" lastFinishedPulling="2025-12-01 08:56:43.575172432 +0000 UTC m=+1083.647460376" observedRunningTime="2025-12-01 08:56:44.76067638 +0000 UTC m=+1084.832964284" watchObservedRunningTime="2025-12-01 08:56:44.765746089 +0000 UTC m=+1084.838033993"
Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.808898 4689 scope.go:117] "RemoveContainer" containerID="22716f9ae4156a8dc3a9e5a6c9aa514050df1b9033502a1cc185c074d0f09301"
Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.816415 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-lmql7"]
Dec 01 08:56:44 crc kubenswrapper[4689]: I1201 08:56:44.820470 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-lmql7"]
Dec 01 08:56:45 crc kubenswrapper[4689]: I1201 08:56:45.063376 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad3fe76-0301-4ad7-be7a-2dea749a1c63" path="/var/lib/kubelet/pods/5ad3fe76-0301-4ad7-be7a-2dea749a1c63/volumes"
Dec 01 08:56:45 crc kubenswrapper[4689]: I1201 08:56:45.754983 4689 generic.go:334] "Generic (PLEG): container finished" podID="fbecbbce-632b-4832-b4aa-6834ff6541e5" containerID="fab6e207a5bb22be56610cfc7b26e34ca92f987886681d3a174260301254d349" exitCode=0
Dec 01 08:56:45 crc kubenswrapper[4689]: I1201 08:56:45.755385 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5qxkw" event={"ID":"fbecbbce-632b-4832-b4aa-6834ff6541e5","Type":"ContainerDied","Data":"fab6e207a5bb22be56610cfc7b26e34ca92f987886681d3a174260301254d349"}
Dec 01 08:56:46 crc kubenswrapper[4689]: I1201 08:56:46.769808 4689 generic.go:334] "Generic (PLEG): container finished" podID="21498a51-fbab-4263-88dd-9c30df75721c" containerID="83ecdc041efdabd181b74491ceb4c864f3fa2b8f0b1c3cc9ed5539e8ed1f522e" exitCode=0
Dec 01 08:56:46 crc kubenswrapper[4689]: I1201 08:56:46.770332 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wkgqk" event={"ID":"21498a51-fbab-4263-88dd-9c30df75721c","Type":"ContainerDied","Data":"83ecdc041efdabd181b74491ceb4c864f3fa2b8f0b1c3cc9ed5539e8ed1f522e"}
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.141355 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5qxkw"
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.254231 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-db-sync-config-data\") pod \"fbecbbce-632b-4832-b4aa-6834ff6541e5\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") "
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.254604 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrqw4\" (UniqueName: \"kubernetes.io/projected/fbecbbce-632b-4832-b4aa-6834ff6541e5-kube-api-access-qrqw4\") pod \"fbecbbce-632b-4832-b4aa-6834ff6541e5\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") "
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.254649 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-config-data\") pod \"fbecbbce-632b-4832-b4aa-6834ff6541e5\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") "
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.254713 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-combined-ca-bundle\") pod \"fbecbbce-632b-4832-b4aa-6834ff6541e5\" (UID: \"fbecbbce-632b-4832-b4aa-6834ff6541e5\") "
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.268183 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fbecbbce-632b-4832-b4aa-6834ff6541e5" (UID: "fbecbbce-632b-4832-b4aa-6834ff6541e5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.268224 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbecbbce-632b-4832-b4aa-6834ff6541e5-kube-api-access-qrqw4" (OuterVolumeSpecName: "kube-api-access-qrqw4") pod "fbecbbce-632b-4832-b4aa-6834ff6541e5" (UID: "fbecbbce-632b-4832-b4aa-6834ff6541e5"). InnerVolumeSpecName "kube-api-access-qrqw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.284461 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbecbbce-632b-4832-b4aa-6834ff6541e5" (UID: "fbecbbce-632b-4832-b4aa-6834ff6541e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.316560 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-config-data" (OuterVolumeSpecName: "config-data") pod "fbecbbce-632b-4832-b4aa-6834ff6541e5" (UID: "fbecbbce-632b-4832-b4aa-6834ff6541e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.356241 4689 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.356272 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrqw4\" (UniqueName: \"kubernetes.io/projected/fbecbbce-632b-4832-b4aa-6834ff6541e5-kube-api-access-qrqw4\") on node \"crc\" DevicePath \"\""
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.356282 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.356291 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbecbbce-632b-4832-b4aa-6834ff6541e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.781722 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5qxkw"
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.781749 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5qxkw" event={"ID":"fbecbbce-632b-4832-b4aa-6834ff6541e5","Type":"ContainerDied","Data":"01c03df410ccc9d22c394ab5e5ba4fd3efd443996d7ab61f0c76a775d9670eb1"}
Dec 01 08:56:47 crc kubenswrapper[4689]: I1201 08:56:47.781816 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01c03df410ccc9d22c394ab5e5ba4fd3efd443996d7ab61f0c76a775d9670eb1"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.124003 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wkgqk"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.276840 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-combined-ca-bundle\") pod \"21498a51-fbab-4263-88dd-9c30df75721c\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") "
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.276943 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-config-data\") pod \"21498a51-fbab-4263-88dd-9c30df75721c\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") "
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.277031 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wljvz\" (UniqueName: \"kubernetes.io/projected/21498a51-fbab-4263-88dd-9c30df75721c-kube-api-access-wljvz\") pod \"21498a51-fbab-4263-88dd-9c30df75721c\" (UID: \"21498a51-fbab-4263-88dd-9c30df75721c\") "
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.277951 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-6xbtd"]
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.278343 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08afe837-f3ba-42bb-b61b-492d30229c45" containerName="mariadb-account-create-update"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282546 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="08afe837-f3ba-42bb-b61b-492d30229c45" containerName="mariadb-account-create-update"
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.282624 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad3fe76-0301-4ad7-be7a-2dea749a1c63" containerName="dnsmasq-dns"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282635 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad3fe76-0301-4ad7-be7a-2dea749a1c63" containerName="dnsmasq-dns"
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.282658 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbecbbce-632b-4832-b4aa-6834ff6541e5" containerName="glance-db-sync"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282666 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbecbbce-632b-4832-b4aa-6834ff6541e5" containerName="glance-db-sync"
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.282674 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad3fe76-0301-4ad7-be7a-2dea749a1c63" containerName="init"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282680 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad3fe76-0301-4ad7-be7a-2dea749a1c63" containerName="init"
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.282690 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45efc88a-3b6d-41f2-91fa-8025cfed0b11" containerName="mariadb-database-create"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282696 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="45efc88a-3b6d-41f2-91fa-8025cfed0b11" containerName="mariadb-database-create"
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.282712 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbffd63-b0cb-41e8-a4a7-d995432ad88c" containerName="mariadb-database-create"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282718 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbffd63-b0cb-41e8-a4a7-d995432ad88c" containerName="mariadb-database-create"
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.282743 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2761a9b0-bfa4-4992-83d9-532157d688c4" containerName="mariadb-account-create-update"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282748 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2761a9b0-bfa4-4992-83d9-532157d688c4" containerName="mariadb-account-create-update"
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.282760 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a21dc6-917e-454e-a0ef-c4f21af302b3" containerName="mariadb-database-create"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282766 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a21dc6-917e-454e-a0ef-c4f21af302b3" containerName="mariadb-database-create"
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.282780 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b770c2-67d6-4b04-9a4d-a3a1cad52cc6" containerName="mariadb-account-create-update"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282786 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b770c2-67d6-4b04-9a4d-a3a1cad52cc6" containerName="mariadb-account-create-update"
Dec 01 08:56:48 crc kubenswrapper[4689]: E1201 08:56:48.282797 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21498a51-fbab-4263-88dd-9c30df75721c" containerName="keystone-db-sync"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.282806 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="21498a51-fbab-4263-88dd-9c30df75721c" containerName="keystone-db-sync"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.283088 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b770c2-67d6-4b04-9a4d-a3a1cad52cc6" containerName="mariadb-account-create-update"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.283120 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad3fe76-0301-4ad7-be7a-2dea749a1c63" containerName="dnsmasq-dns"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.283129 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="21498a51-fbab-4263-88dd-9c30df75721c" containerName="keystone-db-sync"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.283142 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="45efc88a-3b6d-41f2-91fa-8025cfed0b11" containerName="mariadb-database-create"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.283149 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2761a9b0-bfa4-4992-83d9-532157d688c4" containerName="mariadb-account-create-update"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.283158 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="08afe837-f3ba-42bb-b61b-492d30229c45" containerName="mariadb-account-create-update"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.283172 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a21dc6-917e-454e-a0ef-c4f21af302b3" containerName="mariadb-database-create"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.283184 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbecbbce-632b-4832-b4aa-6834ff6541e5" containerName="glance-db-sync"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.283194 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbffd63-b0cb-41e8-a4a7-d995432ad88c" containerName="mariadb-database-create"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.284694 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.293846 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21498a51-fbab-4263-88dd-9c30df75721c-kube-api-access-wljvz" (OuterVolumeSpecName: "kube-api-access-wljvz") pod "21498a51-fbab-4263-88dd-9c30df75721c" (UID: "21498a51-fbab-4263-88dd-9c30df75721c"). InnerVolumeSpecName "kube-api-access-wljvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.331279 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21498a51-fbab-4263-88dd-9c30df75721c" (UID: "21498a51-fbab-4263-88dd-9c30df75721c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.331607 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-6xbtd"]
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.402923 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-config\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.403048 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lvfw\" (UniqueName: \"kubernetes.io/projected/7c8b7623-5762-411b-9245-598e5b57f9da-kube-api-access-9lvfw\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.403094 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.403420 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.403558 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.403642 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.404329 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.404344 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wljvz\" (UniqueName: \"kubernetes.io/projected/21498a51-fbab-4263-88dd-9c30df75721c-kube-api-access-wljvz\") on node \"crc\" DevicePath \"\""
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.440717 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-config-data" (OuterVolumeSpecName: "config-data") pod "21498a51-fbab-4263-88dd-9c30df75721c" (UID: "21498a51-fbab-4263-88dd-9c30df75721c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.504867 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.505231 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.505287 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-config\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.505326 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lvfw\" (UniqueName: \"kubernetes.io/projected/7c8b7623-5762-411b-9245-598e5b57f9da-kube-api-access-9lvfw\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.505379 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.505420 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.505509 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21498a51-fbab-4263-88dd-9c30df75721c-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.505809 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.506100 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.506579 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.506613 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-config\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.506901 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.525727 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lvfw\" (UniqueName: \"kubernetes.io/projected/7c8b7623-5762-411b-9245-598e5b57f9da-kube-api-access-9lvfw\") pod \"dnsmasq-dns-7ff5475cc9-6xbtd\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.710705 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.790697 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wkgqk" event={"ID":"21498a51-fbab-4263-88dd-9c30df75721c","Type":"ContainerDied","Data":"793f7f96e6bcbc12ffa09f218a8c5ce03c734662295a7172059da523faae15f6"}
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.790739 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793f7f96e6bcbc12ffa09f218a8c5ce03c734662295a7172059da523faae15f6"
Dec 01 08:56:48 crc kubenswrapper[4689]: I1201 08:56:48.790814 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wkgqk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.030448 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6lm44"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.031815 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.034982 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.039393 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.048456 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6lm44"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.049254 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.049476 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.053428 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-44r55"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.064944 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-6xbtd"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.122110 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.123443 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219013 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-config\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219261 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-credential-keys\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219286 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-scripts\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219319 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-combined-ca-bundle\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219341 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219388 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219503 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219530 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6k6\" (UniqueName: \"kubernetes.io/projected/923d279f-d980-4e9e-88dd-a0d7aaa266c4-kube-api-access-4h6k6\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219548 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-config-data\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219591 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrlk\" (UniqueName: \"kubernetes.io/projected/e12d10f6-caef-4c9d-9d88-332042911454-kube-api-access-mnrlk\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219607 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.219634 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-fernet-keys\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.230943 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-6xbtd"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.277959 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.305401 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kx454"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.317296 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kx454"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320582 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-fernet-keys\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320630 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-scripts\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320659 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-combined-ca-bundle\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320685 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-config\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320707 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-credential-keys\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320724 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-scripts\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320756 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-combined-ca-bundle\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320772 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqlzj\" (UniqueName: \"kubernetes.io/projected/767a61f9-7a7d-43df-b53f-efdc8c693381-kube-api-access-jqlzj\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320792 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-db-sync-config-data\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320813 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320851 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320880 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320901 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h6k6\" (UniqueName: \"kubernetes.io/projected/923d279f-d980-4e9e-88dd-a0d7aaa266c4-kube-api-access-4h6k6\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320916 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-config-data\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320942 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-config-data\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320973 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767a61f9-7a7d-43df-b53f-efdc8c693381-etc-machine-id\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.320992 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrlk\" (UniqueName: \"kubernetes.io/projected/e12d10f6-caef-4c9d-9d88-332042911454-kube-api-access-mnrlk\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.321010 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.321891 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.325074 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-config\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.335984 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.338634 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.338898 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.343456 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w7brp" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.343648 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.343790 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.346965 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-config-data\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.347186 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-combined-ca-bundle\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.351570 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kx454"] Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.368919 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-scripts\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44" Dec 01 08:56:49 
crc kubenswrapper[4689]: I1201 08:56:49.369503 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-credential-keys\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.380196 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-fernet-keys\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.380696 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h6k6\" (UniqueName: \"kubernetes.io/projected/923d279f-d980-4e9e-88dd-a0d7aaa266c4-kube-api-access-4h6k6\") pod \"dnsmasq-dns-5c5cc7c5ff-dgj8k\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.394586 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrlk\" (UniqueName: \"kubernetes.io/projected/e12d10f6-caef-4c9d-9d88-332042911454-kube-api-access-mnrlk\") pod \"keystone-bootstrap-6lm44\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " pod="openstack/keystone-bootstrap-6lm44" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.407520 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7585876fd5-877pk"] Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.408957 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7585876fd5-877pk" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.417480 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pw86s" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.417780 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.417957 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.417974 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.426271 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-config-data\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.426336 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767a61f9-7a7d-43df-b53f-efdc8c693381-etc-machine-id\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.426393 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-scripts\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " 
pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.426419 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-combined-ca-bundle\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.426467 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqlzj\" (UniqueName: \"kubernetes.io/projected/767a61f9-7a7d-43df-b53f-efdc8c693381-kube-api-access-jqlzj\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.426486 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-db-sync-config-data\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.431823 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767a61f9-7a7d-43df-b53f-efdc8c693381-etc-machine-id\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.436688 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-scripts\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.440610 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-combined-ca-bundle\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.440975 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-db-sync-config-data\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.445611 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-config-data\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.457958 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7585876fd5-877pk"] Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.462528 4689 util.go:30] "No sandbox for pod can be found. 
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.534568 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqlzj\" (UniqueName: \"kubernetes.io/projected/767a61f9-7a7d-43df-b53f-efdc8c693381-kube-api-access-jqlzj\") pod \"cinder-db-sync-kx454\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " pod="openstack/cinder-db-sync-kx454"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.537239 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863bf673-6941-42ac-90ff-9e70bbf3f05a-logs\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.537292 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-scripts\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.537331 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/863bf673-6941-42ac-90ff-9e70bbf3f05a-horizon-secret-key\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.537416 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-config-data\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.537438 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95tg\" (UniqueName: \"kubernetes.io/projected/863bf673-6941-42ac-90ff-9e70bbf3f05a-kube-api-access-m95tg\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.609044 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kfc4d"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.610241 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kfc4d"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.615152 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.615339 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.620453 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xp6ks"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.639033 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863bf673-6941-42ac-90ff-9e70bbf3f05a-logs\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.639081 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-scripts\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.639118 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/863bf673-6941-42ac-90ff-9e70bbf3f05a-horizon-secret-key\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.639171 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-config-data\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.639187 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m95tg\" (UniqueName: \"kubernetes.io/projected/863bf673-6941-42ac-90ff-9e70bbf3f05a-kube-api-access-m95tg\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.639896 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863bf673-6941-42ac-90ff-9e70bbf3f05a-logs\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.643505 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.643514 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-config-data\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.653423 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-scripts\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.663177 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/863bf673-6941-42ac-90ff-9e70bbf3f05a-horizon-secret-key\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.664818 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6lm44"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.665176 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kfc4d"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.665217 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.665309 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.714554 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.714725 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.718583 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kx454"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.736252 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95tg\" (UniqueName: \"kubernetes.io/projected/863bf673-6941-42ac-90ff-9e70bbf3f05a-kube-api-access-m95tg\") pod \"horizon-7585876fd5-877pk\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.742497 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-scripts\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.742756 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.742850 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-config\") pod \"neutron-db-sync-kfc4d\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " pod="openstack/neutron-db-sync-kfc4d"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.742956 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6td4z\" (UniqueName: \"kubernetes.io/projected/f240a66f-70cd-4747-b16f-807e6715e7a0-kube-api-access-6td4z\") pod \"neutron-db-sync-kfc4d\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " pod="openstack/neutron-db-sync-kfc4d"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.743114 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-combined-ca-bundle\") pod \"neutron-db-sync-kfc4d\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " pod="openstack/neutron-db-sync-kfc4d"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.743236 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-log-httpd\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.743388 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.743508 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-config-data\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.743622 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r29fc\" (UniqueName: \"kubernetes.io/projected/f54de58e-9111-462b-a86e-8e324060c8aa-kube-api-access-r29fc\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.743796 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-run-httpd\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.745811 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7585876fd5-877pk"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.746408 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-776b5c685-s4c5n"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.747798 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-776b5c685-s4c5n"
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.775749 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-5ttrw"]
Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.777191 4689 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.804166 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.804227 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-d7gf9" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.817235 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-776b5c685-s4c5n"] Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846523 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6551765-e11c-4cfc-a20f-976c7b1807ad-logs\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846567 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-combined-ca-bundle\") pod \"neutron-db-sync-kfc4d\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " pod="openstack/neutron-db-sync-kfc4d" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846584 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-config-data\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846604 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-log-httpd\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846631 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-scripts\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846653 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlqc\" (UniqueName: \"kubernetes.io/projected/b6551765-e11c-4cfc-a20f-976c7b1807ad-kube-api-access-8rlqc\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846678 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846705 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-config-data\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " 
pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846729 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r29fc\" (UniqueName: \"kubernetes.io/projected/f54de58e-9111-462b-a86e-8e324060c8aa-kube-api-access-r29fc\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846753 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-combined-ca-bundle\") pod \"barbican-db-sync-5ttrw\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846776 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6551765-e11c-4cfc-a20f-976c7b1807ad-horizon-secret-key\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846801 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-db-sync-config-data\") pod \"barbican-db-sync-5ttrw\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846825 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-run-httpd\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846863 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4v76\" (UniqueName: \"kubernetes.io/projected/878af3f4-684c-457b-b943-b47aa64dcb58-kube-api-access-l4v76\") pod \"barbican-db-sync-5ttrw\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846884 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-scripts\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846902 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846917 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-config\") pod \"neutron-db-sync-kfc4d\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " pod="openstack/neutron-db-sync-kfc4d" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.846943 4689 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-6td4z\" (UniqueName: \"kubernetes.io/projected/f240a66f-70cd-4747-b16f-807e6715e7a0-kube-api-access-6td4z\") pod \"neutron-db-sync-kfc4d\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " pod="openstack/neutron-db-sync-kfc4d" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.848555 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5ttrw"] Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.849033 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-log-httpd\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.853106 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-run-httpd\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.854411 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.855218 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-combined-ca-bundle\") pod \"neutron-db-sync-kfc4d\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " pod="openstack/neutron-db-sync-kfc4d" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.863536 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-config\") pod \"neutron-db-sync-kfc4d\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " pod="openstack/neutron-db-sync-kfc4d" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.867083 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-scripts\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.871691 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-config-data\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.876128 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.890963 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"] Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.902210 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r29fc\" (UniqueName: 
\"kubernetes.io/projected/f54de58e-9111-462b-a86e-8e324060c8aa-kube-api-access-r29fc\") pod \"ceilometer-0\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " pod="openstack/ceilometer-0" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.907954 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6td4z\" (UniqueName: \"kubernetes.io/projected/f240a66f-70cd-4747-b16f-807e6715e7a0-kube-api-access-6td4z\") pod \"neutron-db-sync-kfc4d\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " pod="openstack/neutron-db-sync-kfc4d" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.908019 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-f9pr6"] Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.909008 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.918943 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd" event={"ID":"7c8b7623-5762-411b-9245-598e5b57f9da","Type":"ContainerStarted","Data":"226841dc6bca8f30c6e15a317117832828c1f985db89f3c3db95283ba80e0ca2"} Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.918990 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd" event={"ID":"7c8b7623-5762-411b-9245-598e5b57f9da","Type":"ContainerStarted","Data":"1d70a249e237ceb66fb85fc76624110987513075c62f6c2afeee4b960e2f04fb"} Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.919763 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd" podUID="7c8b7623-5762-411b-9245-598e5b57f9da" containerName="init" containerID="cri-o://226841dc6bca8f30c6e15a317117832828c1f985db89f3c3db95283ba80e0ca2" gracePeriod=10 Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.928663 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bhgfx" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.928841 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.942345 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.950142 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-combined-ca-bundle\") pod \"barbican-db-sync-5ttrw\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.950200 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6551765-e11c-4cfc-a20f-976c7b1807ad-horizon-secret-key\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.950227 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-db-sync-config-data\") pod \"barbican-db-sync-5ttrw\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " pod="openstack/barbican-db-sync-5ttrw" Dec 01 
08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.950269 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4v76\" (UniqueName: \"kubernetes.io/projected/878af3f4-684c-457b-b943-b47aa64dcb58-kube-api-access-l4v76\") pod \"barbican-db-sync-5ttrw\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.950319 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6551765-e11c-4cfc-a20f-976c7b1807ad-logs\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.950340 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-config-data\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.950374 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-scripts\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.950393 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rlqc\" (UniqueName: \"kubernetes.io/projected/b6551765-e11c-4cfc-a20f-976c7b1807ad-kube-api-access-8rlqc\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.957766 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-config-data\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.958068 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6551765-e11c-4cfc-a20f-976c7b1807ad-logs\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.961872 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6551765-e11c-4cfc-a20f-976c7b1807ad-horizon-secret-key\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.962384 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-scripts\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.962702 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kfc4d" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.988745 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-combined-ca-bundle\") pod \"barbican-db-sync-5ttrw\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.990163 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f9pr6"] Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.997712 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rlqc\" (UniqueName: \"kubernetes.io/projected/b6551765-e11c-4cfc-a20f-976c7b1807ad-kube-api-access-8rlqc\") pod \"horizon-776b5c685-s4c5n\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:49 crc kubenswrapper[4689]: I1201 08:56:49.998280 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-db-sync-config-data\") pod \"barbican-db-sync-5ttrw\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.000979 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4v76\" (UniqueName: \"kubernetes.io/projected/878af3f4-684c-457b-b943-b47aa64dcb58-kube-api-access-l4v76\") pod \"barbican-db-sync-5ttrw\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.015353 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-v5gml"] Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.032305 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.035477 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-v5gml"] Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.052116 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8txg\" (UniqueName: \"kubernetes.io/projected/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-kube-api-access-n8txg\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.052158 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-logs\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.052222 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-scripts\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.052294 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-config-data\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.052314 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-combined-ca-bundle\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.055758 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.077083 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.137290 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.146740 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.148943 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.158200 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p2bzt" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.158479 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.158613 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159608 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-scripts\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159741 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-config-data\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159762 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-combined-ca-bundle\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159783 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159814 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159833 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-config\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159878 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8txg\" (UniqueName: \"kubernetes.io/projected/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-kube-api-access-n8txg\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159893 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-logs\") pod 
\"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159909 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159926 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2tlg\" (UniqueName: \"kubernetes.io/projected/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-kube-api-access-j2tlg\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.159944 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.187270 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-logs\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.199294 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-config-data\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.200774 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-combined-ca-bundle\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.214615 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-scripts\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.220010 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8txg\" (UniqueName: \"kubernetes.io/projected/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-kube-api-access-n8txg\") pod \"placement-db-sync-f9pr6\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.223435 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.269335 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.269645 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.269810 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-config\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.269966 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.270097 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.270211 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tlg\" (UniqueName: \"kubernetes.io/projected/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-kube-api-access-j2tlg\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.289750 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.289865 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-f9pr6" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.290679 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.290844 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-config\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.291200 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.291834 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.305236 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2tlg\" (UniqueName: \"kubernetes.io/projected/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-kube-api-access-j2tlg\") pod \"dnsmasq-dns-8b5c85b87-v5gml\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") " pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.373317 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.375629 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.375671 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.375695 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-config-data\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.375738 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-logs\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.375763 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.375785 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-scripts\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:50 crc kubenswrapper[4689]: I1201 08:56:50.375859 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttdss\" (UniqueName: \"kubernetes.io/projected/45f2e78d-f339-4376-bf35-05208c6b277b-kube-api-access-ttdss\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.459030 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"] Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.479332 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttdss\" (UniqueName: \"kubernetes.io/projected/45f2e78d-f339-4376-bf35-05208c6b277b-kube-api-access-ttdss\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.479408 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.479435 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.479457 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-config-data\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.479501 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-logs\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.479537 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.479564 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-scripts\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.480746 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.480910 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-logs\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.481337 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.486338 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.487470 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-config-data\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.499043 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-scripts\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.629913 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttdss\" (UniqueName: \"kubernetes.io/projected/45f2e78d-f339-4376-bf35-05208c6b277b-kube-api-access-ttdss\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.663550 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.674243 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.674855 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.690282 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.715480 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.745507 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.836189 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.836231 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.836264 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.836281 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.836299 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b284j\" (UniqueName: \"kubernetes.io/projected/d6a92850-d699-4169-be88-6bf241235c16-kube-api-access-b284j\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.836327 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.836390 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.937849 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.938329 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.938457 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.938504 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.938532 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.938563 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b284j\" (UniqueName: \"kubernetes.io/projected/d6a92850-d699-4169-be88-6bf241235c16-kube-api-access-b284j\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.938595 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.939155 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.940082 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.940318 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.941727 4689 generic.go:334] "Generic (PLEG): container finished" podID="7c8b7623-5762-411b-9245-598e5b57f9da" containerID="226841dc6bca8f30c6e15a317117832828c1f985db89f3c3db95283ba80e0ca2" exitCode=0 Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.941802 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd" event={"ID":"7c8b7623-5762-411b-9245-598e5b57f9da","Type":"ContainerDied","Data":"226841dc6bca8f30c6e15a317117832828c1f985db89f3c3db95283ba80e0ca2"} Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.953841 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.954972 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.959304 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" event={"ID":"923d279f-d980-4e9e-88dd-a0d7aaa266c4","Type":"ContainerStarted","Data":"3a703f38e30c8c8f808bfd073176fc535471b02598a8d00ddda0b8216b7c964b"} Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.966004 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b284j\" (UniqueName: \"kubernetes.io/projected/d6a92850-d699-4169-be88-6bf241235c16-kube-api-access-b284j\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:50.967808 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:51.077678 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:51.103167 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:51.942112 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:51.994517 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd" event={"ID":"7c8b7623-5762-411b-9245-598e5b57f9da","Type":"ContainerDied","Data":"1d70a249e237ceb66fb85fc76624110987513075c62f6c2afeee4b960e2f04fb"} Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:51.994595 4689 scope.go:117] "RemoveContainer" containerID="226841dc6bca8f30c6e15a317117832828c1f985db89f3c3db95283ba80e0ca2" Dec 01 08:56:51 crc kubenswrapper[4689]: I1201 08:56:51.994545 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-6xbtd" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.002963 4689 generic.go:334] "Generic (PLEG): container finished" podID="923d279f-d980-4e9e-88dd-a0d7aaa266c4" containerID="3be9f4e7b8f2e24f9e2989820f95ef81ad7ad0f2c0dd9200bd879f0748a3b859" exitCode=0 Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.003002 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" event={"ID":"923d279f-d980-4e9e-88dd-a0d7aaa266c4","Type":"ContainerDied","Data":"3be9f4e7b8f2e24f9e2989820f95ef81ad7ad0f2c0dd9200bd879f0748a3b859"} Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.060424 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-nb\") pod \"7c8b7623-5762-411b-9245-598e5b57f9da\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.060483 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lvfw\" (UniqueName: \"kubernetes.io/projected/7c8b7623-5762-411b-9245-598e5b57f9da-kube-api-access-9lvfw\") pod \"7c8b7623-5762-411b-9245-598e5b57f9da\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.060536 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-config\") pod \"7c8b7623-5762-411b-9245-598e5b57f9da\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.060567 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-svc\") pod \"7c8b7623-5762-411b-9245-598e5b57f9da\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.060614 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-swift-storage-0\") pod \"7c8b7623-5762-411b-9245-598e5b57f9da\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.060661 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-sb\") pod \"7c8b7623-5762-411b-9245-598e5b57f9da\" (UID: \"7c8b7623-5762-411b-9245-598e5b57f9da\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.085381 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8b7623-5762-411b-9245-598e5b57f9da-kube-api-access-9lvfw" (OuterVolumeSpecName: "kube-api-access-9lvfw") pod "7c8b7623-5762-411b-9245-598e5b57f9da" (UID: "7c8b7623-5762-411b-9245-598e5b57f9da"). InnerVolumeSpecName "kube-api-access-9lvfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.091117 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-config" (OuterVolumeSpecName: "config") pod "7c8b7623-5762-411b-9245-598e5b57f9da" (UID: "7c8b7623-5762-411b-9245-598e5b57f9da"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.093709 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c8b7623-5762-411b-9245-598e5b57f9da" (UID: "7c8b7623-5762-411b-9245-598e5b57f9da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.101105 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c8b7623-5762-411b-9245-598e5b57f9da" (UID: "7c8b7623-5762-411b-9245-598e5b57f9da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.101509 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c8b7623-5762-411b-9245-598e5b57f9da" (UID: "7c8b7623-5762-411b-9245-598e5b57f9da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.102139 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c8b7623-5762-411b-9245-598e5b57f9da" (UID: "7c8b7623-5762-411b-9245-598e5b57f9da"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.163689 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.163723 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lvfw\" (UniqueName: \"kubernetes.io/projected/7c8b7623-5762-411b-9245-598e5b57f9da-kube-api-access-9lvfw\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.163735 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.163745 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.163753 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.163764 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c8b7623-5762-411b-9245-598e5b57f9da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.414635 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.432439 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-6xbtd"] Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.454816 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-6xbtd"] Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.469428 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7585876fd5-877pk"] Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.497946 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cb9c55b6f-d69t9"] Dec 01 08:56:52 crc kubenswrapper[4689]: E1201 08:56:52.498959 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8b7623-5762-411b-9245-598e5b57f9da" containerName="init" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.499025 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8b7623-5762-411b-9245-598e5b57f9da" containerName="init" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.499238 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8b7623-5762-411b-9245-598e5b57f9da" containerName="init" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.500181 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.526589 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cb9c55b6f-d69t9"] Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.605179 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.616736 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.679943 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkdtj\" (UniqueName: \"kubernetes.io/projected/f88ff950-322a-4e58-8cfa-def03f3c0752-kube-api-access-hkdtj\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.680014 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-scripts\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.680039 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f88ff950-322a-4e58-8cfa-def03f3c0752-logs\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.680081 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f88ff950-322a-4e58-8cfa-def03f3c0752-horizon-secret-key\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.680095 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-config-data\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.787897 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkdtj\" (UniqueName: \"kubernetes.io/projected/f88ff950-322a-4e58-8cfa-def03f3c0752-kube-api-access-hkdtj\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.787989 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-scripts\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.788031 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f88ff950-322a-4e58-8cfa-def03f3c0752-logs\") pod \"horizon-5cb9c55b6f-d69t9\" 
(UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.792966 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-scripts\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.793474 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f88ff950-322a-4e58-8cfa-def03f3c0752-logs\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.801620 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f88ff950-322a-4e58-8cfa-def03f3c0752-horizon-secret-key\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.801697 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-config-data\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.809766 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-config-data\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.835747 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.839427 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5ttrw"] Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.894749 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f88ff950-322a-4e58-8cfa-def03f3c0752-horizon-secret-key\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.895621 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f9pr6"] Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.904446 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkdtj\" (UniqueName: \"kubernetes.io/projected/f88ff950-322a-4e58-8cfa-def03f3c0752-kube-api-access-hkdtj\") pod \"horizon-5cb9c55b6f-d69t9\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.904535 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-swift-storage-0\") pod \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.904602 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-nb\") pod \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.904683 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-svc\") pod \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.904708 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-sb\") pod \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.904799 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-config\") pod \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.904877 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h6k6\" (UniqueName: \"kubernetes.io/projected/923d279f-d980-4e9e-88dd-a0d7aaa266c4-kube-api-access-4h6k6\") pod \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.911266 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/923d279f-d980-4e9e-88dd-a0d7aaa266c4-kube-api-access-4h6k6" (OuterVolumeSpecName: "kube-api-access-4h6k6") pod 
"923d279f-d980-4e9e-88dd-a0d7aaa266c4" (UID: "923d279f-d980-4e9e-88dd-a0d7aaa266c4"). InnerVolumeSpecName "kube-api-access-4h6k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.940467 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "923d279f-d980-4e9e-88dd-a0d7aaa266c4" (UID: "923d279f-d980-4e9e-88dd-a0d7aaa266c4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.994740 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kfc4d"] Dec 01 08:56:52 crc kubenswrapper[4689]: E1201 08:56:52.996736 4689 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-nb podName:923d279f-d980-4e9e-88dd-a0d7aaa266c4 nodeName:}" failed. No retries permitted until 2025-12-01 08:56:53.496701453 +0000 UTC m=+1093.568989357 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-nb" (UniqueName: "kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-nb") pod "923d279f-d980-4e9e-88dd-a0d7aaa266c4" (UID: "923d279f-d980-4e9e-88dd-a0d7aaa266c4") : error deleting /var/lib/kubelet/pods/923d279f-d980-4e9e-88dd-a0d7aaa266c4/volume-subpaths: remove /var/lib/kubelet/pods/923d279f-d980-4e9e-88dd-a0d7aaa266c4/volume-subpaths: no such file or directory Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.997046 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "923d279f-d980-4e9e-88dd-a0d7aaa266c4" (UID: "923d279f-d980-4e9e-88dd-a0d7aaa266c4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.997072 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "923d279f-d980-4e9e-88dd-a0d7aaa266c4" (UID: "923d279f-d980-4e9e-88dd-a0d7aaa266c4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:52 crc kubenswrapper[4689]: I1201 08:56:52.997039 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-config" (OuterVolumeSpecName: "config") pod "923d279f-d980-4e9e-88dd-a0d7aaa266c4" (UID: "923d279f-d980-4e9e-88dd-a0d7aaa266c4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.010173 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h6k6\" (UniqueName: \"kubernetes.io/projected/923d279f-d980-4e9e-88dd-a0d7aaa266c4-kube-api-access-4h6k6\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.010203 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.010213 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.010223 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.010232 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.021430 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kx454"] Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.028765 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5ttrw" event={"ID":"878af3f4-684c-457b-b943-b47aa64dcb58","Type":"ContainerStarted","Data":"480b5f881fb111563da3c8b941ed218a9b39f5962e3551f07aad6d0594280ec2"} Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.035198 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" event={"ID":"923d279f-d980-4e9e-88dd-a0d7aaa266c4","Type":"ContainerDied","Data":"3a703f38e30c8c8f808bfd073176fc535471b02598a8d00ddda0b8216b7c964b"} Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.035500 4689 scope.go:117] "RemoveContainer" containerID="3be9f4e7b8f2e24f9e2989820f95ef81ad7ad0f2c0dd9200bd879f0748a3b859" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.035591 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.040393 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-776b5c685-s4c5n" event={"ID":"b6551765-e11c-4cfc-a20f-976c7b1807ad","Type":"ContainerStarted","Data":"d5ec7353576391b54cedcbf4d4ad7a00296418214053bd8cd42b88cf60f9c48e"} Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.041628 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6lm44" event={"ID":"e12d10f6-caef-4c9d-9d88-332042911454","Type":"ContainerStarted","Data":"f10abe809df509b54730c5986a6869383848d24efa86547c2bd109d740351ec7"} Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.058769 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c8b7623-5762-411b-9245-598e5b57f9da" path="/var/lib/kubelet/pods/7c8b7623-5762-411b-9245-598e5b57f9da/volumes" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.063224 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-776b5c685-s4c5n"] Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.063258 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfc4d" event={"ID":"f240a66f-70cd-4747-b16f-807e6715e7a0","Type":"ContainerStarted","Data":"1d763543cc1e20539e25adbbfd2d4452b24f04f963e4bcc196c8065082e7410c"} Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.063276 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7585876fd5-877pk" event={"ID":"863bf673-6941-42ac-90ff-9e70bbf3f05a","Type":"ContainerStarted","Data":"5a45c03f2b4b81d336a095f3169f4cfe755ada9d61d891a05c7f1d01e00c66d5"} Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.064538 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kx454" event={"ID":"767a61f9-7a7d-43df-b53f-efdc8c693381","Type":"ContainerStarted","Data":"c349819f5831f8081ecc56f1765983cc4931af41a39edd250a9ba8cdb0cd42ab"} Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.077913 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f9pr6" event={"ID":"dc8aad14-4d75-45c4-9456-db0e80ffd8e7","Type":"ContainerStarted","Data":"7636d0ac6006d6e95d22685f651202fa724ec186aafbdfed52740e373ba5841b"} Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.159403 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.195489 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7585876fd5-877pk"] Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.214824 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6lm44"] Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.224151 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-v5gml"] Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.242520 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.250170 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.523227 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-nb\") pod \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\" (UID: \"923d279f-d980-4e9e-88dd-a0d7aaa266c4\") " Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.523937 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "923d279f-d980-4e9e-88dd-a0d7aaa266c4" (UID: "923d279f-d980-4e9e-88dd-a0d7aaa266c4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.525614 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/923d279f-d980-4e9e-88dd-a0d7aaa266c4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.697890 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"] Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.705480 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-dgj8k"] Dec 01 08:56:53 crc kubenswrapper[4689]: I1201 08:56:53.783142 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cb9c55b6f-d69t9"] Dec 01 08:56:53 crc kubenswrapper[4689]: W1201 08:56:53.794941 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf88ff950_322a_4e58_8cfa_def03f3c0752.slice/crio-f371d39159d4c00e3f62d58b1f07a7e75ac6c13ac6997e74e48032b333adcc21 WatchSource:0}: Error finding container f371d39159d4c00e3f62d58b1f07a7e75ac6c13ac6997e74e48032b333adcc21: Status 404 returned error can't find the container with id f371d39159d4c00e3f62d58b1f07a7e75ac6c13ac6997e74e48032b333adcc21 Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.036016 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:56:54 crc kubenswrapper[4689]: W1201 08:56:54.061291 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45f2e78d_f339_4376_bf35_05208c6b277b.slice/crio-9cada2e734c74408e920fe2ed266117a8a8774106f659d477684627ce358505a WatchSource:0}: Error finding container 9cada2e734c74408e920fe2ed266117a8a8774106f659d477684627ce358505a: Status 404 
returned error can't find the container with id 9cada2e734c74408e920fe2ed266117a8a8774106f659d477684627ce358505a Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.114541 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6lm44" event={"ID":"e12d10f6-caef-4c9d-9d88-332042911454","Type":"ContainerStarted","Data":"37c994c606873636a62994ed7609e596e42de192a923f11a618d108755886555"} Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.119560 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45f2e78d-f339-4376-bf35-05208c6b277b","Type":"ContainerStarted","Data":"9cada2e734c74408e920fe2ed266117a8a8774106f659d477684627ce358505a"} Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.136417 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6lm44" podStartSLOduration=6.136314594 podStartE2EDuration="6.136314594s" podCreationTimestamp="2025-12-01 08:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:54.134323259 +0000 UTC m=+1094.206611163" watchObservedRunningTime="2025-12-01 08:56:54.136314594 +0000 UTC m=+1094.208602498" Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.143796 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f54de58e-9111-462b-a86e-8e324060c8aa","Type":"ContainerStarted","Data":"72a69f752069c7c0043e8b641e84c8a59cd5a3d76ed9c8596d72a2ca20805a7a"} Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.149844 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cb9c55b6f-d69t9" event={"ID":"f88ff950-322a-4e58-8cfa-def03f3c0752","Type":"ContainerStarted","Data":"f371d39159d4c00e3f62d58b1f07a7e75ac6c13ac6997e74e48032b333adcc21"} Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.151853 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6a92850-d699-4169-be88-6bf241235c16","Type":"ContainerStarted","Data":"50f640235998fc8ed7dd7200bcef58fe72298ff609c3099257148f393d372843"} Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.154000 4689 generic.go:334] "Generic (PLEG): container finished" podID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" containerID="96526ff594f5b22294f4fa9b589c03d2574c4f45f687c40417695c662a77ee2d" exitCode=0 Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.154162 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" event={"ID":"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2","Type":"ContainerDied","Data":"96526ff594f5b22294f4fa9b589c03d2574c4f45f687c40417695c662a77ee2d"} Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.154276 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" event={"ID":"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2","Type":"ContainerStarted","Data":"6a244b9a6cf6e697d35c8d0d4fc07f358f5fa0f527e2fa1fbcee3724013ea09c"} Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.156883 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfc4d" event={"ID":"f240a66f-70cd-4747-b16f-807e6715e7a0","Type":"ContainerStarted","Data":"a594a7715aa34dcac9c70c5a096c7b85d42af74194a425c8c8b35f799d8fb14a"} Dec 01 08:56:54 crc kubenswrapper[4689]: I1201 08:56:54.200700 4689 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/neutron-db-sync-kfc4d" podStartSLOduration=5.200674868 podStartE2EDuration="5.200674868s" podCreationTimestamp="2025-12-01 08:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:54.195335502 +0000 UTC m=+1094.267623406" watchObservedRunningTime="2025-12-01 08:56:54.200674868 +0000 UTC m=+1094.272962762" Dec 01 08:56:55 crc kubenswrapper[4689]: I1201 08:56:55.059034 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="923d279f-d980-4e9e-88dd-a0d7aaa266c4" path="/var/lib/kubelet/pods/923d279f-d980-4e9e-88dd-a0d7aaa266c4/volumes" Dec 01 08:56:55 crc kubenswrapper[4689]: I1201 08:56:55.209303 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6a92850-d699-4169-be88-6bf241235c16","Type":"ContainerStarted","Data":"e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98"} Dec 01 08:56:55 crc kubenswrapper[4689]: I1201 08:56:55.226457 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" event={"ID":"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2","Type":"ContainerStarted","Data":"dc579cedb070df927524192a142e9de958280e6e24ec7109f350def44b4af7de"} Dec 01 08:56:55 crc kubenswrapper[4689]: I1201 08:56:55.226588 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" Dec 01 08:56:55 crc kubenswrapper[4689]: I1201 08:56:55.248786 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" podStartSLOduration=6.24876182 podStartE2EDuration="6.24876182s" podCreationTimestamp="2025-12-01 08:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:55.244189294 +0000 UTC m=+1095.316477208" watchObservedRunningTime="2025-12-01 08:56:55.24876182 +0000 UTC m=+1095.321049724" Dec 01 08:56:56 crc kubenswrapper[4689]: I1201 08:56:56.257970 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45f2e78d-f339-4376-bf35-05208c6b277b","Type":"ContainerStarted","Data":"18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184"} Dec 01 08:56:56 crc kubenswrapper[4689]: I1201 08:56:56.280130 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6a92850-d699-4169-be88-6bf241235c16" containerName="glance-log" containerID="cri-o://e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98" gracePeriod=30 Dec 01 08:56:56 crc kubenswrapper[4689]: I1201 08:56:56.280446 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6a92850-d699-4169-be88-6bf241235c16","Type":"ContainerStarted","Data":"a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c"} Dec 01 08:56:56 crc kubenswrapper[4689]: I1201 08:56:56.280675 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6a92850-d699-4169-be88-6bf241235c16" containerName="glance-httpd" containerID="cri-o://a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c" gracePeriod=30 Dec 01 08:56:56 crc kubenswrapper[4689]: I1201 08:56:56.323896 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.323864803 podStartE2EDuration="7.323864803s" podCreationTimestamp="2025-12-01 08:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:56.323723409 +0000 UTC m=+1096.396011313" watchObservedRunningTime="2025-12-01 08:56:56.323864803 +0000 UTC m=+1096.396152707" Dec 01 08:56:56 crc kubenswrapper[4689]: I1201 08:56:56.925558 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.017120 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-httpd-run\") pod \"d6a92850-d699-4169-be88-6bf241235c16\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.017159 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-logs\") pod \"d6a92850-d699-4169-be88-6bf241235c16\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.017205 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b284j\" (UniqueName: \"kubernetes.io/projected/d6a92850-d699-4169-be88-6bf241235c16-kube-api-access-b284j\") pod \"d6a92850-d699-4169-be88-6bf241235c16\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.017233 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-scripts\") pod \"d6a92850-d699-4169-be88-6bf241235c16\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.017317 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-combined-ca-bundle\") pod \"d6a92850-d699-4169-be88-6bf241235c16\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.017345 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"d6a92850-d699-4169-be88-6bf241235c16\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.017383 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-config-data\") pod \"d6a92850-d699-4169-be88-6bf241235c16\" (UID: \"d6a92850-d699-4169-be88-6bf241235c16\") " Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.022818 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-logs" (OuterVolumeSpecName: "logs") pod "d6a92850-d699-4169-be88-6bf241235c16" (UID: "d6a92850-d699-4169-be88-6bf241235c16"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.027055 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d6a92850-d699-4169-be88-6bf241235c16" (UID: "d6a92850-d699-4169-be88-6bf241235c16"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.032235 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a92850-d699-4169-be88-6bf241235c16-kube-api-access-b284j" (OuterVolumeSpecName: "kube-api-access-b284j") pod "d6a92850-d699-4169-be88-6bf241235c16" (UID: "d6a92850-d699-4169-be88-6bf241235c16"). InnerVolumeSpecName "kube-api-access-b284j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.033355 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-scripts" (OuterVolumeSpecName: "scripts") pod "d6a92850-d699-4169-be88-6bf241235c16" (UID: "d6a92850-d699-4169-be88-6bf241235c16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.037673 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "d6a92850-d699-4169-be88-6bf241235c16" (UID: "d6a92850-d699-4169-be88-6bf241235c16"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.090909 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6a92850-d699-4169-be88-6bf241235c16" (UID: "d6a92850-d699-4169-be88-6bf241235c16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.119052 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.119094 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.119991 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-config-data" (OuterVolumeSpecName: "config-data") pod "d6a92850-d699-4169-be88-6bf241235c16" (UID: "d6a92850-d699-4169-be88-6bf241235c16"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.120133 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.120150 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a92850-d699-4169-be88-6bf241235c16-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.120160 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b284j\" (UniqueName: \"kubernetes.io/projected/d6a92850-d699-4169-be88-6bf241235c16-kube-api-access-b284j\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.120172 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.138088 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.221765 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.221985 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6a92850-d699-4169-be88-6bf241235c16-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.301859 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="45f2e78d-f339-4376-bf35-05208c6b277b" containerName="glance-log" containerID="cri-o://18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184" gracePeriod=30 Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.302243 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="45f2e78d-f339-4376-bf35-05208c6b277b" containerName="glance-httpd" containerID="cri-o://4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a" gracePeriod=30 Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.302227 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45f2e78d-f339-4376-bf35-05208c6b277b","Type":"ContainerStarted","Data":"4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a"} Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.309559 4689 generic.go:334] "Generic (PLEG): container finished" podID="d6a92850-d699-4169-be88-6bf241235c16" containerID="a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c" exitCode=143 Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.309596 4689 generic.go:334] "Generic (PLEG): container finished" podID="d6a92850-d699-4169-be88-6bf241235c16" containerID="e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98" exitCode=143 Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.309652 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d6a92850-d699-4169-be88-6bf241235c16","Type":"ContainerDied","Data":"a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c"} Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.309693 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.309726 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6a92850-d699-4169-be88-6bf241235c16","Type":"ContainerDied","Data":"e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98"} Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.309742 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6a92850-d699-4169-be88-6bf241235c16","Type":"ContainerDied","Data":"50f640235998fc8ed7dd7200bcef58fe72298ff609c3099257148f393d372843"} Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.309765 4689 scope.go:117] "RemoveContainer" containerID="a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.339433 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.339413863 podStartE2EDuration="8.339413863s" podCreationTimestamp="2025-12-01 08:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:56:57.338251662 +0000 UTC m=+1097.410539566" watchObservedRunningTime="2025-12-01 08:56:57.339413863 +0000 UTC m=+1097.411701767" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.368636 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.381599 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.393129 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:56:57 crc kubenswrapper[4689]: E1201 08:56:57.395588 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923d279f-d980-4e9e-88dd-a0d7aaa266c4" containerName="init" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.395630 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="923d279f-d980-4e9e-88dd-a0d7aaa266c4" containerName="init" Dec 01 08:56:57 crc kubenswrapper[4689]: E1201 08:56:57.395660 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a92850-d699-4169-be88-6bf241235c16" containerName="glance-httpd" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.395671 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a92850-d699-4169-be88-6bf241235c16" containerName="glance-httpd" Dec 01 08:56:57 crc kubenswrapper[4689]: E1201 08:56:57.395692 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a92850-d699-4169-be88-6bf241235c16" containerName="glance-log" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.395698 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a92850-d699-4169-be88-6bf241235c16" containerName="glance-log" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.396111 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="923d279f-d980-4e9e-88dd-a0d7aaa266c4" containerName="init" 
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.396132 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a92850-d699-4169-be88-6bf241235c16" containerName="glance-httpd" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.396148 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a92850-d699-4169-be88-6bf241235c16" containerName="glance-log" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.397483 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.402038 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.403042 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.451690 4689 scope.go:117] "RemoveContainer" containerID="e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.536574 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.536830 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.536905 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.536993 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpk5p\" (UniqueName: \"kubernetes.io/projected/3d35154f-4a7e-4d08-935d-4fadbcd89379-kube-api-access-kpk5p\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.537040 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-logs\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.537074 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 
08:56:57.537138 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.590441 4689 scope.go:117] "RemoveContainer" containerID="a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c" Dec 01 08:56:57 crc kubenswrapper[4689]: E1201 08:56:57.590991 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c\": container with ID starting with a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c not found: ID does not exist" containerID="a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.591030 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c"} err="failed to get container status \"a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c\": rpc error: code = NotFound desc = could not find container \"a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c\": container with ID starting with a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c not found: ID does not exist" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.591054 4689 scope.go:117] "RemoveContainer" containerID="e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98" Dec 01 08:56:57 crc kubenswrapper[4689]: E1201 08:56:57.592406 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98\": container with ID starting with e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98 not found: ID does not exist" containerID="e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.592452 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98"} err="failed to get container status \"e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98\": rpc error: code = NotFound desc = could not find container \"e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98\": container with ID starting with e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98 not found: ID does not exist" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.592481 4689 scope.go:117] "RemoveContainer" containerID="a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c" Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.594489 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c"} err="failed to get container status \"a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c\": rpc error: code = NotFound desc = could not find container \"a1d4006c9643e0abd442a302f7716bd75fa9b566a9b5a290fe34fafb6de7c94c\": container with ID starting with 
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.594523 4689 scope.go:117] "RemoveContainer" containerID="e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.595015 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98"} err="failed to get container status \"e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98\": rpc error: code = NotFound desc = could not find container \"e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98\": container with ID starting with e08a0cee0a4139601eb3f3fe9b4a0292cc34ecf0f3bf8cb077ac46ae442acd98 not found: ID does not exist"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.638522 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.638583 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.638637 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.638671 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.638703 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpk5p\" (UniqueName: \"kubernetes.io/projected/3d35154f-4a7e-4d08-935d-4fadbcd89379-kube-api-access-kpk5p\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.638731 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-logs\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.638767 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.639910 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.640101 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.642047 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-logs\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.645343 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.649418 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.661724 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpk5p\" (UniqueName: \"kubernetes.io/projected/3d35154f-4a7e-4d08-935d-4fadbcd89379-kube-api-access-kpk5p\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.673925 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.682284 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.829481 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 01 08:56:57 crc kubenswrapper[4689]: I1201 08:56:57.992194 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.049767 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-scripts\") pod \"45f2e78d-f339-4376-bf35-05208c6b277b\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.049917 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttdss\" (UniqueName: \"kubernetes.io/projected/45f2e78d-f339-4376-bf35-05208c6b277b-kube-api-access-ttdss\") pod \"45f2e78d-f339-4376-bf35-05208c6b277b\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.049943 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-config-data\") pod \"45f2e78d-f339-4376-bf35-05208c6b277b\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.049987 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-combined-ca-bundle\") pod \"45f2e78d-f339-4376-bf35-05208c6b277b\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.050049 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"45f2e78d-f339-4376-bf35-05208c6b277b\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.050099 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-httpd-run\") pod \"45f2e78d-f339-4376-bf35-05208c6b277b\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.050208 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-logs\") pod \"45f2e78d-f339-4376-bf35-05208c6b277b\" (UID: \"45f2e78d-f339-4376-bf35-05208c6b277b\") " Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.050862 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-logs" (OuterVolumeSpecName: "logs") pod "45f2e78d-f339-4376-bf35-05208c6b277b" (UID: "45f2e78d-f339-4376-bf35-05208c6b277b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.051055 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "45f2e78d-f339-4376-bf35-05208c6b277b" (UID: "45f2e78d-f339-4376-bf35-05208c6b277b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.054903 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f2e78d-f339-4376-bf35-05208c6b277b-kube-api-access-ttdss" (OuterVolumeSpecName: "kube-api-access-ttdss") pod "45f2e78d-f339-4376-bf35-05208c6b277b" (UID: "45f2e78d-f339-4376-bf35-05208c6b277b"). InnerVolumeSpecName "kube-api-access-ttdss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.058223 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-scripts" (OuterVolumeSpecName: "scripts") pod "45f2e78d-f339-4376-bf35-05208c6b277b" (UID: "45f2e78d-f339-4376-bf35-05208c6b277b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.059429 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "45f2e78d-f339-4376-bf35-05208c6b277b" (UID: "45f2e78d-f339-4376-bf35-05208c6b277b"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.079104 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45f2e78d-f339-4376-bf35-05208c6b277b" (UID: "45f2e78d-f339-4376-bf35-05208c6b277b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.136909 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-config-data" (OuterVolumeSpecName: "config-data") pod "45f2e78d-f339-4376-bf35-05208c6b277b" (UID: "45f2e78d-f339-4376-bf35-05208c6b277b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.152030 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.152073 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttdss\" (UniqueName: \"kubernetes.io/projected/45f2e78d-f339-4376-bf35-05208c6b277b-kube-api-access-ttdss\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.152089 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.152217 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45f2e78d-f339-4376-bf35-05208c6b277b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.152251 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.152265 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.152278 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45f2e78d-f339-4376-bf35-05208c6b277b-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.177632 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.254235 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.326498 4689 generic.go:334] "Generic (PLEG): container finished" podID="45f2e78d-f339-4376-bf35-05208c6b277b" containerID="4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a" exitCode=143 Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.326540 4689 generic.go:334] "Generic (PLEG): container finished" podID="45f2e78d-f339-4376-bf35-05208c6b277b" containerID="18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184" exitCode=143 Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.326589 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45f2e78d-f339-4376-bf35-05208c6b277b","Type":"ContainerDied","Data":"4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a"} Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.326621 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45f2e78d-f339-4376-bf35-05208c6b277b","Type":"ContainerDied","Data":"18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184"} Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.326666 4689 scope.go:117] "RemoveContainer" containerID="4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.326809 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.374565 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.400104 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.413472 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 08:56:58 crc kubenswrapper[4689]: E1201 08:56:58.413881 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f2e78d-f339-4376-bf35-05208c6b277b" containerName="glance-log"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.413893 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f2e78d-f339-4376-bf35-05208c6b277b" containerName="glance-log"
Dec 01 08:56:58 crc kubenswrapper[4689]: E1201 08:56:58.413906 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f2e78d-f339-4376-bf35-05208c6b277b" containerName="glance-httpd"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.413912 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f2e78d-f339-4376-bf35-05208c6b277b" containerName="glance-httpd"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.414076 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f2e78d-f339-4376-bf35-05208c6b277b" containerName="glance-httpd"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.414089 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f2e78d-f339-4376-bf35-05208c6b277b" containerName="glance-log"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.415038 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.427029 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.454791 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.553952 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.566762 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-scripts\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.566824 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.566850 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.566921 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9tmx\" (UniqueName: \"kubernetes.io/projected/f48e5437-c685-4121-954b-1d3d8625bd28-kube-api-access-j9tmx\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.566967 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-logs\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.567008 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.567032 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-config-data\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0"
Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.668018 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0"
(UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.668087 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-config-data\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.668149 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-scripts\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.668182 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.668202 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.668238 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9tmx\" (UniqueName: \"kubernetes.io/projected/f48e5437-c685-4121-954b-1d3d8625bd28-kube-api-access-j9tmx\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.668279 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-logs\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.668813 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-logs\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.669517 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.669724 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: 
\"f48e5437-c685-4121-954b-1d3d8625bd28\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.680697 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.681223 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-scripts\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.696089 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-config-data\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.709149 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9tmx\" (UniqueName: \"kubernetes.io/projected/f48e5437-c685-4121-954b-1d3d8625bd28-kube-api-access-j9tmx\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.765358 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " pod="openstack/glance-default-external-api-0" Dec 01 08:56:58 crc kubenswrapper[4689]: I1201 08:56:58.816436 4689 util.go:30] "No sandbox for pod can be found. 
Dec 01 08:56:59 crc kubenswrapper[4689]: I1201 08:56:59.088154 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f2e78d-f339-4376-bf35-05208c6b277b" path="/var/lib/kubelet/pods/45f2e78d-f339-4376-bf35-05208c6b277b/volumes"
Dec 01 08:56:59 crc kubenswrapper[4689]: I1201 08:56:59.090741 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a92850-d699-4169-be88-6bf241235c16" path="/var/lib/kubelet/pods/d6a92850-d699-4169-be88-6bf241235c16/volumes"
Dec 01 08:56:59 crc kubenswrapper[4689]: I1201 08:56:59.360012 4689 generic.go:334] "Generic (PLEG): container finished" podID="e12d10f6-caef-4c9d-9d88-332042911454" containerID="37c994c606873636a62994ed7609e596e42de192a923f11a618d108755886555" exitCode=0
Dec 01 08:56:59 crc kubenswrapper[4689]: I1201 08:56:59.360062 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6lm44" event={"ID":"e12d10f6-caef-4c9d-9d88-332042911454","Type":"ContainerDied","Data":"37c994c606873636a62994ed7609e596e42de192a923f11a618d108755886555"}
Dec 01 08:57:00 crc kubenswrapper[4689]: I1201 08:57:00.375806 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml"
Dec 01 08:57:00 crc kubenswrapper[4689]: I1201 08:57:00.376509 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 01 08:57:00 crc kubenswrapper[4689]: I1201 08:57:00.471356 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bv9wg"]
Dec 01 08:57:00 crc kubenswrapper[4689]: I1201 08:57:00.472263 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" containerID="cri-o://e64ddeef17e1dc2b79ca91d031e2897d8d3287a13dd93cae0a1f4c6c71e4f2e9" gracePeriod=10
Dec 01 08:57:00 crc kubenswrapper[4689]: I1201 08:57:00.509285 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.004081 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.406466 4689 generic.go:334] "Generic (PLEG): container finished" podID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerID="e64ddeef17e1dc2b79ca91d031e2897d8d3287a13dd93cae0a1f4c6c71e4f2e9" exitCode=0
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.406503 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" event={"ID":"f3fc4aaf-1747-4ced-877d-63533218e8f1","Type":"ContainerDied","Data":"e64ddeef17e1dc2b79ca91d031e2897d8d3287a13dd93cae0a1f4c6c71e4f2e9"}
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.695090 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-776b5c685-s4c5n"]
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.728166 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78d9cd9dbd-qxwq7"]
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.729681 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.744434 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.773014 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d9cd9dbd-qxwq7"]
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.861576 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx6kr\" (UniqueName: \"kubernetes.io/projected/e88c04bb-01ff-47a6-8942-05a9a2a68416-kube-api-access-rx6kr\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.861842 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-config-data\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.861961 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-secret-key\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.862046 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-scripts\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.862145 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e88c04bb-01ff-47a6-8942-05a9a2a68416-logs\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.865620 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-combined-ca-bundle\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.866277 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-tls-certs\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.884002 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cb9c55b6f-d69t9"]
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.914825 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d65b9788-2kr5p"]
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.916573 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.936524 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d65b9788-2kr5p"]
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.967611 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e88c04bb-01ff-47a6-8942-05a9a2a68416-logs\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.967720 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-combined-ca-bundle\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.967755 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-tls-certs\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.967802 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx6kr\" (UniqueName: \"kubernetes.io/projected/e88c04bb-01ff-47a6-8942-05a9a2a68416-kube-api-access-rx6kr\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.967819 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-config-data\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.967857 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-secret-key\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.967876 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-scripts\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.968172 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e88c04bb-01ff-47a6-8942-05a9a2a68416-logs\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.968573 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-scripts\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7"
\"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.969633 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-config-data\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.977823 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-tls-certs\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.978323 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-combined-ca-bundle\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.992356 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-secret-key\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:57:01 crc kubenswrapper[4689]: I1201 08:57:01.992386 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx6kr\" (UniqueName: \"kubernetes.io/projected/e88c04bb-01ff-47a6-8942-05a9a2a68416-kube-api-access-rx6kr\") pod \"horizon-78d9cd9dbd-qxwq7\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.049154 4689 util.go:30] "No sandbox for pod can be found. 
Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.069511 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcebf70c-3de0-499e-928d-3419299a512f-logs\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.069644 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcebf70c-3de0-499e-928d-3419299a512f-horizon-tls-certs\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.069669 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcebf70c-3de0-499e-928d-3419299a512f-config-data\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.069699 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fcebf70c-3de0-499e-928d-3419299a512f-horizon-secret-key\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.069722 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcebf70c-3de0-499e-928d-3419299a512f-scripts\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.069749 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcebf70c-3de0-499e-928d-3419299a512f-combined-ca-bundle\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.069773 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz8nv\" (UniqueName: \"kubernetes.io/projected/fcebf70c-3de0-499e-928d-3419299a512f-kube-api-access-dz8nv\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.181785 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcebf70c-3de0-499e-928d-3419299a512f-horizon-tls-certs\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.181840 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcebf70c-3de0-499e-928d-3419299a512f-config-data\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p"
pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.181879 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fcebf70c-3de0-499e-928d-3419299a512f-horizon-secret-key\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.181898 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcebf70c-3de0-499e-928d-3419299a512f-scripts\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.183286 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcebf70c-3de0-499e-928d-3419299a512f-combined-ca-bundle\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.183323 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz8nv\" (UniqueName: \"kubernetes.io/projected/fcebf70c-3de0-499e-928d-3419299a512f-kube-api-access-dz8nv\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.183435 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcebf70c-3de0-499e-928d-3419299a512f-logs\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.183556 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcebf70c-3de0-499e-928d-3419299a512f-scripts\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.183811 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcebf70c-3de0-499e-928d-3419299a512f-logs\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.183970 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fcebf70c-3de0-499e-928d-3419299a512f-config-data\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.189646 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fcebf70c-3de0-499e-928d-3419299a512f-horizon-secret-key\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.196836 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fcebf70c-3de0-499e-928d-3419299a512f-horizon-tls-certs\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.197567 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcebf70c-3de0-499e-928d-3419299a512f-combined-ca-bundle\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.201147 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz8nv\" (UniqueName: \"kubernetes.io/projected/fcebf70c-3de0-499e-928d-3419299a512f-kube-api-access-dz8nv\") pod \"horizon-d65b9788-2kr5p\" (UID: \"fcebf70c-3de0-499e-928d-3419299a512f\") " pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:02 crc kubenswrapper[4689]: I1201 08:57:02.237406 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:06 crc kubenswrapper[4689]: I1201 08:57:06.006983 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Dec 01 08:57:09 crc kubenswrapper[4689]: I1201 08:57:09.147017 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:57:09 crc kubenswrapper[4689]: I1201 08:57:09.147790 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.003585 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.004616 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:57:11 crc kubenswrapper[4689]: E1201 08:57:11.071558 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 01 08:57:11 crc kubenswrapper[4689]: E1201 08:57:11.071951 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
Dec 01 08:57:11 crc kubenswrapper[4689]: E1201 08:57:11.075960 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-776b5c685-s4c5n" podUID="b6551765-e11c-4cfc-a20f-976c7b1807ad"
Dec 01 08:57:11 crc kubenswrapper[4689]: E1201 08:57:11.091076 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
Dec 01 08:57:11 crc kubenswrapper[4689]: E1201 08:57:11.091241 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59hf6h568h674h5cfh5b6hcbh674hbbh549hbh56dh655h5hd6h57hfh679hc8h5hc4h54fh654hd8h56bh54h97hc5h695hfdh559h655q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkdtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5cb9c55b6f-d69t9_openstack(f88ff950-322a-4e58-8cfa-def03f3c0752): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 01 08:57:11 crc kubenswrapper[4689]: E1201 08:57:11.093928 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5cb9c55b6f-d69t9" podUID="f88ff950-322a-4e58-8cfa-def03f3c0752"
Dec 01 08:57:11 crc kubenswrapper[4689]: E1201 08:57:11.119102 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
Dec 01 08:57:11 crc kubenswrapper[4689]: E1201 08:57:11.119276 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n584h5d9h96h5c9h58dh5c7hf8h64dh9h75h59bh559h588h67dhddh76h597hbdh5cch567h566h66hbfh9bhd6h5cdh57h595h97h688h549h58cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m95tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7585876fd5-877pk_openstack(863bf673-6941-42ac-90ff-9e70bbf3f05a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 01 08:57:11 crc kubenswrapper[4689]: E1201 08:57:11.129518 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7585876fd5-877pk" podUID="863bf673-6941-42ac-90ff-9e70bbf3f05a"
Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.188607 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6lm44"
Need to start a new one" pod="openstack/keystone-bootstrap-6lm44" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.278444 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-credential-keys\") pod \"e12d10f6-caef-4c9d-9d88-332042911454\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.278494 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-config-data\") pod \"e12d10f6-caef-4c9d-9d88-332042911454\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.279328 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-combined-ca-bundle\") pod \"e12d10f6-caef-4c9d-9d88-332042911454\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.279609 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrlk\" (UniqueName: \"kubernetes.io/projected/e12d10f6-caef-4c9d-9d88-332042911454-kube-api-access-mnrlk\") pod \"e12d10f6-caef-4c9d-9d88-332042911454\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.279686 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-scripts\") pod \"e12d10f6-caef-4c9d-9d88-332042911454\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.279739 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-fernet-keys\") pod \"e12d10f6-caef-4c9d-9d88-332042911454\" (UID: \"e12d10f6-caef-4c9d-9d88-332042911454\") " Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.285813 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12d10f6-caef-4c9d-9d88-332042911454-kube-api-access-mnrlk" (OuterVolumeSpecName: "kube-api-access-mnrlk") pod "e12d10f6-caef-4c9d-9d88-332042911454" (UID: "e12d10f6-caef-4c9d-9d88-332042911454"). InnerVolumeSpecName "kube-api-access-mnrlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.285999 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e12d10f6-caef-4c9d-9d88-332042911454" (UID: "e12d10f6-caef-4c9d-9d88-332042911454"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.287288 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e12d10f6-caef-4c9d-9d88-332042911454" (UID: "e12d10f6-caef-4c9d-9d88-332042911454"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.301073 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-scripts" (OuterVolumeSpecName: "scripts") pod "e12d10f6-caef-4c9d-9d88-332042911454" (UID: "e12d10f6-caef-4c9d-9d88-332042911454"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.316244 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-config-data" (OuterVolumeSpecName: "config-data") pod "e12d10f6-caef-4c9d-9d88-332042911454" (UID: "e12d10f6-caef-4c9d-9d88-332042911454"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.335689 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e12d10f6-caef-4c9d-9d88-332042911454" (UID: "e12d10f6-caef-4c9d-9d88-332042911454"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.387342 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.387389 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrlk\" (UniqueName: \"kubernetes.io/projected/e12d10f6-caef-4c9d-9d88-332042911454-kube-api-access-mnrlk\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.387402 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.387411 4689 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.387422 4689 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.387430 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d10f6-caef-4c9d-9d88-332042911454-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.504720 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3d35154f-4a7e-4d08-935d-4fadbcd89379","Type":"ContainerStarted","Data":"d472fe1ac12848acf093cba9e81f55fd128ba744a9c704dfce58b95f282f1b4e"} Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.506889 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6lm44" 
event={"ID":"e12d10f6-caef-4c9d-9d88-332042911454","Type":"ContainerDied","Data":"f10abe809df509b54730c5986a6869383848d24efa86547c2bd109d740351ec7"} Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.506930 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f10abe809df509b54730c5986a6869383848d24efa86547c2bd109d740351ec7" Dec 01 08:57:11 crc kubenswrapper[4689]: I1201 08:57:11.506943 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6lm44" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.382332 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6lm44"] Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.388940 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6lm44"] Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.486340 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4tdn6"] Dec 01 08:57:12 crc kubenswrapper[4689]: E1201 08:57:12.486857 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d10f6-caef-4c9d-9d88-332042911454" containerName="keystone-bootstrap" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.486881 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d10f6-caef-4c9d-9d88-332042911454" containerName="keystone-bootstrap" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.487130 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d10f6-caef-4c9d-9d88-332042911454" containerName="keystone-bootstrap" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.488355 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.522223 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-credential-keys\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.522274 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-fernet-keys\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.522337 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-combined-ca-bundle\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.522355 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g54tb\" (UniqueName: \"kubernetes.io/projected/498e2dd1-b659-447d-9f5d-8a86c48fae77-kube-api-access-g54tb\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.522423 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-scripts\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.522449 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-config-data\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.534355 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4tdn6"] Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.535806 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-44r55" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.536073 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.536107 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.536107 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.536810 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.624229 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-combined-ca-bundle\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.624262 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g54tb\" (UniqueName: \"kubernetes.io/projected/498e2dd1-b659-447d-9f5d-8a86c48fae77-kube-api-access-g54tb\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.624319 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-scripts\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.624345 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-config-data\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.624426 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-credential-keys\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 
01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.624448 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-fernet-keys\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.632670 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-fernet-keys\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.632927 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-combined-ca-bundle\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.633397 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-config-data\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.634136 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-scripts\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.647075 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-credential-keys\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.649167 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g54tb\" (UniqueName: \"kubernetes.io/projected/498e2dd1-b659-447d-9f5d-8a86c48fae77-kube-api-access-g54tb\") pod \"keystone-bootstrap-4tdn6\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:12 crc kubenswrapper[4689]: I1201 08:57:12.868914 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:13 crc kubenswrapper[4689]: I1201 08:57:13.058675 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12d10f6-caef-4c9d-9d88-332042911454" path="/var/lib/kubelet/pods/e12d10f6-caef-4c9d-9d88-332042911454/volumes" Dec 01 08:57:17 crc kubenswrapper[4689]: E1201 08:57:17.377308 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 01 08:57:17 crc kubenswrapper[4689]: E1201 08:57:17.378274 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4v76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-5ttrw_openstack(878af3f4-684c-457b-b943-b47aa64dcb58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:57:17 crc kubenswrapper[4689]: E1201 08:57:17.379623 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-5ttrw" podUID="878af3f4-684c-457b-b943-b47aa64dcb58" Dec 01 08:57:17 crc kubenswrapper[4689]: E1201 08:57:17.622074 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-5ttrw" podUID="878af3f4-684c-457b-b943-b47aa64dcb58" Dec 01 08:57:19 crc kubenswrapper[4689]: I1201 08:57:19.702643 4689 generic.go:334] "Generic (PLEG): container finished" podID="f240a66f-70cd-4747-b16f-807e6715e7a0" containerID="a594a7715aa34dcac9c70c5a096c7b85d42af74194a425c8c8b35f799d8fb14a" exitCode=0 
Dec 01 08:57:19 crc kubenswrapper[4689]: I1201 08:57:19.703007 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfc4d" event={"ID":"f240a66f-70cd-4747-b16f-807e6715e7a0","Type":"ContainerDied","Data":"a594a7715aa34dcac9c70c5a096c7b85d42af74194a425c8c8b35f799d8fb14a"} Dec 01 08:57:21 crc kubenswrapper[4689]: I1201 08:57:21.004029 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Dec 01 08:57:26 crc kubenswrapper[4689]: I1201 08:57:26.005078 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.336585 4689 scope.go:117] "RemoveContainer" containerID="18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.551346 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7585876fd5-877pk" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.559923 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.580250 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.582993 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.743696 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-scripts\") pod \"f88ff950-322a-4e58-8cfa-def03f3c0752\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.743767 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6551765-e11c-4cfc-a20f-976c7b1807ad-horizon-secret-key\") pod \"b6551765-e11c-4cfc-a20f-976c7b1807ad\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.743825 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rlqc\" (UniqueName: \"kubernetes.io/projected/b6551765-e11c-4cfc-a20f-976c7b1807ad-kube-api-access-8rlqc\") pod \"b6551765-e11c-4cfc-a20f-976c7b1807ad\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.743852 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-config-data\") pod \"f88ff950-322a-4e58-8cfa-def03f3c0752\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.743888 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-svc\") pod \"f3fc4aaf-1747-4ced-877d-63533218e8f1\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.743940 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m95tg\" (UniqueName: \"kubernetes.io/projected/863bf673-6941-42ac-90ff-9e70bbf3f05a-kube-api-access-m95tg\") pod \"863bf673-6941-42ac-90ff-9e70bbf3f05a\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.743969 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-scripts\") pod \"863bf673-6941-42ac-90ff-9e70bbf3f05a\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.743990 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-config-data\") pod \"863bf673-6941-42ac-90ff-9e70bbf3f05a\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744021 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6551765-e11c-4cfc-a20f-976c7b1807ad-logs\") pod \"b6551765-e11c-4cfc-a20f-976c7b1807ad\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744039 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-scripts\") pod \"b6551765-e11c-4cfc-a20f-976c7b1807ad\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " Dec 
01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744064 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-config\") pod \"f3fc4aaf-1747-4ced-877d-63533218e8f1\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744086 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863bf673-6941-42ac-90ff-9e70bbf3f05a-logs\") pod \"863bf673-6941-42ac-90ff-9e70bbf3f05a\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744141 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkdtj\" (UniqueName: \"kubernetes.io/projected/f88ff950-322a-4e58-8cfa-def03f3c0752-kube-api-access-hkdtj\") pod \"f88ff950-322a-4e58-8cfa-def03f3c0752\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744193 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-nb\") pod \"f3fc4aaf-1747-4ced-877d-63533218e8f1\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744260 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f88ff950-322a-4e58-8cfa-def03f3c0752-horizon-secret-key\") pod \"f88ff950-322a-4e58-8cfa-def03f3c0752\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744298 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-swift-storage-0\") pod \"f3fc4aaf-1747-4ced-877d-63533218e8f1\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744332 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-config-data\") pod \"b6551765-e11c-4cfc-a20f-976c7b1807ad\" (UID: \"b6551765-e11c-4cfc-a20f-976c7b1807ad\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744453 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-sb\") pod \"f3fc4aaf-1747-4ced-877d-63533218e8f1\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744479 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f88ff950-322a-4e58-8cfa-def03f3c0752-logs\") pod \"f88ff950-322a-4e58-8cfa-def03f3c0752\" (UID: \"f88ff950-322a-4e58-8cfa-def03f3c0752\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744510 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnd66\" (UniqueName: \"kubernetes.io/projected/f3fc4aaf-1747-4ced-877d-63533218e8f1-kube-api-access-pnd66\") pod \"f3fc4aaf-1747-4ced-877d-63533218e8f1\" (UID: \"f3fc4aaf-1747-4ced-877d-63533218e8f1\") " Dec 01 08:57:27 crc 
kubenswrapper[4689]: I1201 08:57:27.744539 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/863bf673-6941-42ac-90ff-9e70bbf3f05a-horizon-secret-key\") pod \"863bf673-6941-42ac-90ff-9e70bbf3f05a\" (UID: \"863bf673-6941-42ac-90ff-9e70bbf3f05a\") " Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.744966 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-config-data" (OuterVolumeSpecName: "config-data") pod "f88ff950-322a-4e58-8cfa-def03f3c0752" (UID: "f88ff950-322a-4e58-8cfa-def03f3c0752"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.745856 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/863bf673-6941-42ac-90ff-9e70bbf3f05a-logs" (OuterVolumeSpecName: "logs") pod "863bf673-6941-42ac-90ff-9e70bbf3f05a" (UID: "863bf673-6941-42ac-90ff-9e70bbf3f05a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.745972 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-config-data" (OuterVolumeSpecName: "config-data") pod "863bf673-6941-42ac-90ff-9e70bbf3f05a" (UID: "863bf673-6941-42ac-90ff-9e70bbf3f05a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.746240 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-scripts" (OuterVolumeSpecName: "scripts") pod "b6551765-e11c-4cfc-a20f-976c7b1807ad" (UID: "b6551765-e11c-4cfc-a20f-976c7b1807ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.746550 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-config-data" (OuterVolumeSpecName: "config-data") pod "b6551765-e11c-4cfc-a20f-976c7b1807ad" (UID: "b6551765-e11c-4cfc-a20f-976c7b1807ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.747460 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6551765-e11c-4cfc-a20f-976c7b1807ad-logs" (OuterVolumeSpecName: "logs") pod "b6551765-e11c-4cfc-a20f-976c7b1807ad" (UID: "b6551765-e11c-4cfc-a20f-976c7b1807ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.748423 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-scripts" (OuterVolumeSpecName: "scripts") pod "863bf673-6941-42ac-90ff-9e70bbf3f05a" (UID: "863bf673-6941-42ac-90ff-9e70bbf3f05a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.748607 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f88ff950-322a-4e58-8cfa-def03f3c0752-logs" (OuterVolumeSpecName: "logs") pod "f88ff950-322a-4e58-8cfa-def03f3c0752" (UID: "f88ff950-322a-4e58-8cfa-def03f3c0752"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.748659 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-scripts" (OuterVolumeSpecName: "scripts") pod "f88ff950-322a-4e58-8cfa-def03f3c0752" (UID: "f88ff950-322a-4e58-8cfa-def03f3c0752"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.764019 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fc4aaf-1747-4ced-877d-63533218e8f1-kube-api-access-pnd66" (OuterVolumeSpecName: "kube-api-access-pnd66") pod "f3fc4aaf-1747-4ced-877d-63533218e8f1" (UID: "f3fc4aaf-1747-4ced-877d-63533218e8f1"). InnerVolumeSpecName "kube-api-access-pnd66". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.770125 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863bf673-6941-42ac-90ff-9e70bbf3f05a-kube-api-access-m95tg" (OuterVolumeSpecName: "kube-api-access-m95tg") pod "863bf673-6941-42ac-90ff-9e70bbf3f05a" (UID: "863bf673-6941-42ac-90ff-9e70bbf3f05a"). InnerVolumeSpecName "kube-api-access-m95tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.772574 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6551765-e11c-4cfc-a20f-976c7b1807ad-kube-api-access-8rlqc" (OuterVolumeSpecName: "kube-api-access-8rlqc") pod "b6551765-e11c-4cfc-a20f-976c7b1807ad" (UID: "b6551765-e11c-4cfc-a20f-976c7b1807ad"). InnerVolumeSpecName "kube-api-access-8rlqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.772645 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6551765-e11c-4cfc-a20f-976c7b1807ad-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b6551765-e11c-4cfc-a20f-976c7b1807ad" (UID: "b6551765-e11c-4cfc-a20f-976c7b1807ad"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.772699 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863bf673-6941-42ac-90ff-9e70bbf3f05a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "863bf673-6941-42ac-90ff-9e70bbf3f05a" (UID: "863bf673-6941-42ac-90ff-9e70bbf3f05a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.774764 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88ff950-322a-4e58-8cfa-def03f3c0752-kube-api-access-hkdtj" (OuterVolumeSpecName: "kube-api-access-hkdtj") pod "f88ff950-322a-4e58-8cfa-def03f3c0752" (UID: "f88ff950-322a-4e58-8cfa-def03f3c0752"). 
InnerVolumeSpecName "kube-api-access-hkdtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.778241 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88ff950-322a-4e58-8cfa-def03f3c0752-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f88ff950-322a-4e58-8cfa-def03f3c0752" (UID: "f88ff950-322a-4e58-8cfa-def03f3c0752"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.800005 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-config" (OuterVolumeSpecName: "config") pod "f3fc4aaf-1747-4ced-877d-63533218e8f1" (UID: "f3fc4aaf-1747-4ced-877d-63533218e8f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.811079 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-776b5c685-s4c5n" event={"ID":"b6551765-e11c-4cfc-a20f-976c7b1807ad","Type":"ContainerDied","Data":"d5ec7353576391b54cedcbf4d4ad7a00296418214053bd8cd42b88cf60f9c48e"} Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.811332 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-776b5c685-s4c5n" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.815053 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7585876fd5-877pk" event={"ID":"863bf673-6941-42ac-90ff-9e70bbf3f05a","Type":"ContainerDied","Data":"5a45c03f2b4b81d336a095f3169f4cfe755ada9d61d891a05c7f1d01e00c66d5"} Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.815519 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7585876fd5-877pk" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.816047 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3fc4aaf-1747-4ced-877d-63533218e8f1" (UID: "f3fc4aaf-1747-4ced-877d-63533218e8f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.822391 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cb9c55b6f-d69t9" event={"ID":"f88ff950-322a-4e58-8cfa-def03f3c0752","Type":"ContainerDied","Data":"f371d39159d4c00e3f62d58b1f07a7e75ac6c13ac6997e74e48032b333adcc21"} Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.822715 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cb9c55b6f-d69t9" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.825001 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3fc4aaf-1747-4ced-877d-63533218e8f1" (UID: "f3fc4aaf-1747-4ced-877d-63533218e8f1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.825342 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3fc4aaf-1747-4ced-877d-63533218e8f1" (UID: "f3fc4aaf-1747-4ced-877d-63533218e8f1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.827976 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" event={"ID":"f3fc4aaf-1747-4ced-877d-63533218e8f1","Type":"ContainerDied","Data":"6667675247a46d058418cafabd7b71cf70099ee91dc0eb6c0f41332112e3ecc9"} Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.828743 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.841748 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3fc4aaf-1747-4ced-877d-63533218e8f1" (UID: "f3fc4aaf-1747-4ced-877d-63533218e8f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846285 4689 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f88ff950-322a-4e58-8cfa-def03f3c0752-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846321 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846335 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846346 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846355 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f88ff950-322a-4e58-8cfa-def03f3c0752-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846376 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnd66\" (UniqueName: \"kubernetes.io/projected/f3fc4aaf-1747-4ced-877d-63533218e8f1-kube-api-access-pnd66\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846385 4689 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/863bf673-6941-42ac-90ff-9e70bbf3f05a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846395 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846405 4689 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6551765-e11c-4cfc-a20f-976c7b1807ad-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846414 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rlqc\" (UniqueName: \"kubernetes.io/projected/b6551765-e11c-4cfc-a20f-976c7b1807ad-kube-api-access-8rlqc\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846424 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f88ff950-322a-4e58-8cfa-def03f3c0752-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846434 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846442 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m95tg\" (UniqueName: \"kubernetes.io/projected/863bf673-6941-42ac-90ff-9e70bbf3f05a-kube-api-access-m95tg\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846450 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846458 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/863bf673-6941-42ac-90ff-9e70bbf3f05a-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846466 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6551765-e11c-4cfc-a20f-976c7b1807ad-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846476 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6551765-e11c-4cfc-a20f-976c7b1807ad-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846484 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846492 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/863bf673-6941-42ac-90ff-9e70bbf3f05a-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846499 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkdtj\" (UniqueName: \"kubernetes.io/projected/f88ff950-322a-4e58-8cfa-def03f3c0752-kube-api-access-hkdtj\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.846507 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3fc4aaf-1747-4ced-877d-63533218e8f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:27 crc 
kubenswrapper[4689]: I1201 08:57:27.944941 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-776b5c685-s4c5n"] Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.952100 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-776b5c685-s4c5n"] Dec 01 08:57:27 crc kubenswrapper[4689]: I1201 08:57:27.987441 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7585876fd5-877pk"] Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.010425 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7585876fd5-877pk"] Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.026413 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cb9c55b6f-d69t9"] Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.033155 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5cb9c55b6f-d69t9"] Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.039159 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.155946 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kfc4d" Dec 01 08:57:28 crc kubenswrapper[4689]: E1201 08:57:28.159587 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 01 08:57:28 crc kubenswrapper[4689]: E1201 08:57:28.159879 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59ch88h65h5dbh558h555hf6h5b9h6fhd6h8fh6h9ch648h596h659h8ch664h5f7hd6h58bhdh577h5h657h58dh558hfh676h688h58fh66bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r29fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f54de58e-9111-462b-a86e-8e324060c8aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.182542 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bv9wg"] Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.205821 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-bv9wg"] Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.354632 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6td4z\" (UniqueName: \"kubernetes.io/projected/f240a66f-70cd-4747-b16f-807e6715e7a0-kube-api-access-6td4z\") pod \"f240a66f-70cd-4747-b16f-807e6715e7a0\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.354720 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-combined-ca-bundle\") pod \"f240a66f-70cd-4747-b16f-807e6715e7a0\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.354761 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-config\") pod \"f240a66f-70cd-4747-b16f-807e6715e7a0\" (UID: \"f240a66f-70cd-4747-b16f-807e6715e7a0\") " Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.359104 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f240a66f-70cd-4747-b16f-807e6715e7a0-kube-api-access-6td4z" (OuterVolumeSpecName: "kube-api-access-6td4z") pod "f240a66f-70cd-4747-b16f-807e6715e7a0" (UID: "f240a66f-70cd-4747-b16f-807e6715e7a0"). InnerVolumeSpecName "kube-api-access-6td4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.384499 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f240a66f-70cd-4747-b16f-807e6715e7a0" (UID: "f240a66f-70cd-4747-b16f-807e6715e7a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.388425 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-config" (OuterVolumeSpecName: "config") pod "f240a66f-70cd-4747-b16f-807e6715e7a0" (UID: "f240a66f-70cd-4747-b16f-807e6715e7a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.456430 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6td4z\" (UniqueName: \"kubernetes.io/projected/f240a66f-70cd-4747-b16f-807e6715e7a0-kube-api-access-6td4z\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.456461 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.456470 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f240a66f-70cd-4747-b16f-807e6715e7a0-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.837752 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kfc4d" event={"ID":"f240a66f-70cd-4747-b16f-807e6715e7a0","Type":"ContainerDied","Data":"1d763543cc1e20539e25adbbfd2d4452b24f04f963e4bcc196c8065082e7410c"} Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.837788 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d763543cc1e20539e25adbbfd2d4452b24f04f963e4bcc196c8065082e7410c" Dec 01 08:57:28 crc kubenswrapper[4689]: I1201 08:57:28.837797 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kfc4d" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.057692 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863bf673-6941-42ac-90ff-9e70bbf3f05a" path="/var/lib/kubelet/pods/863bf673-6941-42ac-90ff-9e70bbf3f05a/volumes" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.058142 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6551765-e11c-4cfc-a20f-976c7b1807ad" path="/var/lib/kubelet/pods/b6551765-e11c-4cfc-a20f-976c7b1807ad/volumes" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.058524 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" path="/var/lib/kubelet/pods/f3fc4aaf-1747-4ced-877d-63533218e8f1/volumes" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.059166 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88ff950-322a-4e58-8cfa-def03f3c0752" path="/var/lib/kubelet/pods/f88ff950-322a-4e58-8cfa-def03f3c0752/volumes" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.352237 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-64x67"] Dec 01 08:57:29 crc kubenswrapper[4689]: E1201 08:57:29.352885 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.352956 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" Dec 01 08:57:29 crc kubenswrapper[4689]: E1201 08:57:29.353014 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f240a66f-70cd-4747-b16f-807e6715e7a0" containerName="neutron-db-sync" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.353065 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f240a66f-70cd-4747-b16f-807e6715e7a0" containerName="neutron-db-sync" Dec 01 08:57:29 crc kubenswrapper[4689]: E1201 08:57:29.353239 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="init" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.353299 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="init" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.353539 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.353600 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f240a66f-70cd-4747-b16f-807e6715e7a0" containerName="neutron-db-sync" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.354705 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.391116 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-64x67"] Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.475511 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.475583 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.475630 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.475654 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbnz4\" (UniqueName: \"kubernetes.io/projected/d289ed97-fc00-401d-a724-9ff8a60cbc08-kube-api-access-hbnz4\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.475689 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.475740 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-config\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.483291 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74cd45bd8d-lsl5j"] Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.485034 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.488966 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.489642 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.490115 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xp6ks" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.490261 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.516378 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74cd45bd8d-lsl5j"] Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.581609 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.582011 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.582034 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.582086 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbnz4\" (UniqueName: \"kubernetes.io/projected/d289ed97-fc00-401d-a724-9ff8a60cbc08-kube-api-access-hbnz4\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.582122 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.582204 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-config\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.584063 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " 
pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.584594 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.589826 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-config\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.591356 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.596355 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.642483 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbnz4\" (UniqueName: \"kubernetes.io/projected/d289ed97-fc00-401d-a724-9ff8a60cbc08-kube-api-access-hbnz4\") pod \"dnsmasq-dns-84b966f6c9-64x67\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.658474 4689 scope.go:117] "RemoveContainer" containerID="4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a" Dec 01 08:57:29 crc kubenswrapper[4689]: E1201 08:57:29.659033 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a\": container with ID starting with 4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a not found: ID does not exist" containerID="4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.659079 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a"} err="failed to get container status \"4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a\": rpc error: code = NotFound desc = could not find container \"4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a\": container with ID starting with 4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a not found: ID does not exist" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.659115 4689 scope.go:117] "RemoveContainer" containerID="18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184" Dec 01 08:57:29 crc kubenswrapper[4689]: E1201 08:57:29.659753 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184\": container with ID starting with 18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184 not found: ID does not exist" containerID="18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.659788 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184"} err="failed to get container status \"18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184\": rpc error: code = NotFound desc = could not find container \"18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184\": container with ID starting with 18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184 not found: ID does not exist" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.659804 4689 scope.go:117] "RemoveContainer" containerID="4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.661494 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a"} err="failed to get container status \"4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a\": rpc error: code = NotFound desc = could not find container \"4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a\": container with ID starting with 4f68a3dc14caa75f5c7023cb5a54f57b894f335e256ba16b2cd7461a6c777d2a not found: ID does not exist" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.661523 4689 scope.go:117] "RemoveContainer" containerID="18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.662307 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184"} err="failed to get container status \"18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184\": rpc error: code = NotFound desc = could not find container \"18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184\": container with ID starting with 18f7cbe0f72892f2ef341991013471c5a3fc10ea95f6e234b6f6a02f8622c184 not found: ID does not exist" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.662380 4689 scope.go:117] "RemoveContainer" containerID="e64ddeef17e1dc2b79ca91d031e2897d8d3287a13dd93cae0a1f4c6c71e4f2e9" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.672939 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.686459 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7jkr\" (UniqueName: \"kubernetes.io/projected/50356777-8001-44ef-95a4-73db83be36bc-kube-api-access-j7jkr\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.686513 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-config\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.686553 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-combined-ca-bundle\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.686595 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-ovndb-tls-certs\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.686619 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-httpd-config\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: E1201 08:57:29.696424 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 01 08:57:29 crc kubenswrapper[4689]: E1201 08:57:29.696629 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqlzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kx454_openstack(767a61f9-7a7d-43df-b53f-efdc8c693381): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 08:57:29 crc kubenswrapper[4689]: E1201 08:57:29.701436 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kx454" podUID="767a61f9-7a7d-43df-b53f-efdc8c693381" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.788819 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-config\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.789206 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-combined-ca-bundle\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.789268 
4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-ovndb-tls-certs\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.789293 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-httpd-config\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.789408 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7jkr\" (UniqueName: \"kubernetes.io/projected/50356777-8001-44ef-95a4-73db83be36bc-kube-api-access-j7jkr\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.795123 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-config\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.815381 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7jkr\" (UniqueName: \"kubernetes.io/projected/50356777-8001-44ef-95a4-73db83be36bc-kube-api-access-j7jkr\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.817698 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-httpd-config\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.818252 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-combined-ca-bundle\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.833298 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-ovndb-tls-certs\") pod \"neutron-74cd45bd8d-lsl5j\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.934926 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f48e5437-c685-4121-954b-1d3d8625bd28","Type":"ContainerStarted","Data":"a891fe1912e34c8d6a036dcffde054cc1af488674122f634483c0c29bd56944c"} Dec 01 08:57:29 crc kubenswrapper[4689]: I1201 08:57:29.940547 4689 scope.go:117] "RemoveContainer" containerID="b9350be88f0bc28216b5fdffcd4433e24639a21ad8f7800737108559ad9fb387" Dec 01 08:57:29 crc kubenswrapper[4689]: E1201 08:57:29.941182 4689 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-kx454" podUID="767a61f9-7a7d-43df-b53f-efdc8c693381" Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.112726 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.221907 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4tdn6"] Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.456964 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d65b9788-2kr5p"] Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.467493 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d9cd9dbd-qxwq7"] Dec 01 08:57:30 crc kubenswrapper[4689]: W1201 08:57:30.487812 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode88c04bb_01ff_47a6_8942_05a9a2a68416.slice/crio-775cf8bfba0575c3f0bd28d24b74d52353c3436f003d538b93c085ffb30a1471 WatchSource:0}: Error finding container 775cf8bfba0575c3f0bd28d24b74d52353c3436f003d538b93c085ffb30a1471: Status 404 returned error can't find the container with id 775cf8bfba0575c3f0bd28d24b74d52353c3436f003d538b93c085ffb30a1471 Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.641251 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-64x67"] Dec 01 08:57:30 crc kubenswrapper[4689]: W1201 08:57:30.853481 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50356777_8001_44ef_95a4_73db83be36bc.slice/crio-c98d481d294e1ef82b1f23d6f84fc86296637070030df151f61d7d928ae97441 WatchSource:0}: Error finding container c98d481d294e1ef82b1f23d6f84fc86296637070030df151f61d7d928ae97441: Status 404 returned error can't find the container with id c98d481d294e1ef82b1f23d6f84fc86296637070030df151f61d7d928ae97441 Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.857291 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74cd45bd8d-lsl5j"] Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.954831 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cd45bd8d-lsl5j" event={"ID":"50356777-8001-44ef-95a4-73db83be36bc","Type":"ContainerStarted","Data":"c98d481d294e1ef82b1f23d6f84fc86296637070030df151f61d7d928ae97441"} Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.956556 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f9pr6" event={"ID":"dc8aad14-4d75-45c4-9456-db0e80ffd8e7","Type":"ContainerStarted","Data":"1cd2be9868dfb5bb0036601eef7275e918e44677b634a65081e0bc418c903a5e"} Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.961733 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9cd9dbd-qxwq7" event={"ID":"e88c04bb-01ff-47a6-8942-05a9a2a68416","Type":"ContainerStarted","Data":"775cf8bfba0575c3f0bd28d24b74d52353c3436f003d538b93c085ffb30a1471"} Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.975826 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" 
event={"ID":"d289ed97-fc00-401d-a724-9ff8a60cbc08","Type":"ContainerStarted","Data":"90a21ce1fc6f0995f10c2de3ec7e77758dc11416f41f68a601c1ae868dd10e32"} Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.978526 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d65b9788-2kr5p" event={"ID":"fcebf70c-3de0-499e-928d-3419299a512f","Type":"ContainerStarted","Data":"5e013e95f8a08923953392b326758bcc0679776297f833e9a2cb2b9e72eb53ec"} Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.980044 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-f9pr6" podStartSLOduration=5.253341151 podStartE2EDuration="41.980027588s" podCreationTimestamp="2025-12-01 08:56:49 +0000 UTC" firstStartedPulling="2025-12-01 08:56:52.859479943 +0000 UTC m=+1092.931767847" lastFinishedPulling="2025-12-01 08:57:29.58616637 +0000 UTC m=+1129.658454284" observedRunningTime="2025-12-01 08:57:30.97611629 +0000 UTC m=+1131.048404184" watchObservedRunningTime="2025-12-01 08:57:30.980027588 +0000 UTC m=+1131.052315482" Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.983242 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4tdn6" event={"ID":"498e2dd1-b659-447d-9f5d-8a86c48fae77","Type":"ContainerStarted","Data":"2f7514baf81930de56bddc5961cad21fec755c798db6ba412d4b0bbaadcf391a"} Dec 01 08:57:30 crc kubenswrapper[4689]: I1201 08:57:30.983269 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4tdn6" event={"ID":"498e2dd1-b659-447d-9f5d-8a86c48fae77","Type":"ContainerStarted","Data":"7c8215da768b97abe0d46d4c958b8f3ce95de5e7b7d62023a93781f923fbc53a"} Dec 01 08:57:31 crc kubenswrapper[4689]: I1201 08:57:31.009774 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-bv9wg" podUID="f3fc4aaf-1747-4ced-877d-63533218e8f1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Dec 01 08:57:31 crc kubenswrapper[4689]: I1201 08:57:31.010294 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4tdn6" podStartSLOduration=19.010273497 podStartE2EDuration="19.010273497s" podCreationTimestamp="2025-12-01 08:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:31.009885175 +0000 UTC m=+1131.082173079" watchObservedRunningTime="2025-12-01 08:57:31.010273497 +0000 UTC m=+1131.082561401" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.009198 4689 generic.go:334] "Generic (PLEG): container finished" podID="d289ed97-fc00-401d-a724-9ff8a60cbc08" containerID="ae987a74afbed6f910f8ca5aaedca7f12194eaf2b46b2eaf88f4063d6bd6822e" exitCode=0 Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.009472 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" event={"ID":"d289ed97-fc00-401d-a724-9ff8a60cbc08","Type":"ContainerDied","Data":"ae987a74afbed6f910f8ca5aaedca7f12194eaf2b46b2eaf88f4063d6bd6822e"} Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.019799 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f48e5437-c685-4121-954b-1d3d8625bd28","Type":"ContainerStarted","Data":"c2296bd87726d5970873afe432db5d9aae5507574e5750f07b1aeaaaa77c2675"} Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.051737 4689 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cd45bd8d-lsl5j" event={"ID":"50356777-8001-44ef-95a4-73db83be36bc","Type":"ContainerStarted","Data":"ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e"} Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.063575 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3d35154f-4a7e-4d08-935d-4fadbcd89379","Type":"ContainerStarted","Data":"03fec2da359a871d10ad6ab01a043508cb972c9cdaf3fa3c5b4c7087146611a7"} Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.438641 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58c7f9c74f-nqnzt"] Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.440196 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.443984 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.444226 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.453862 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58c7f9c74f-nqnzt"] Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.567290 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-httpd-config\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.567748 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sqlc\" (UniqueName: \"kubernetes.io/projected/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-kube-api-access-7sqlc\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.567817 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-internal-tls-certs\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.567846 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-public-tls-certs\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.568053 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-combined-ca-bundle\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.568211 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-config\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.568269 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-ovndb-tls-certs\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.670264 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-config\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.670345 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-ovndb-tls-certs\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.670430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-httpd-config\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.670500 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sqlc\" (UniqueName: \"kubernetes.io/projected/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-kube-api-access-7sqlc\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.670544 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-internal-tls-certs\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.670566 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-public-tls-certs\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.670610 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-combined-ca-bundle\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.678236 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-ovndb-tls-certs\") pod 
\"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.680160 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-public-tls-certs\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.680835 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-internal-tls-certs\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.685146 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-combined-ca-bundle\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.686472 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-config\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.689971 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-httpd-config\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.692798 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sqlc\" (UniqueName: \"kubernetes.io/projected/9834ce74-a0c7-4e32-9d8b-1d39b27c62b6-kube-api-access-7sqlc\") pod \"neutron-58c7f9c74f-nqnzt\" (UID: \"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6\") " pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:32 crc kubenswrapper[4689]: I1201 08:57:32.771415 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.097767 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cd45bd8d-lsl5j" event={"ID":"50356777-8001-44ef-95a4-73db83be36bc","Type":"ContainerStarted","Data":"24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f"} Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.098764 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.111258 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9cd9dbd-qxwq7" event={"ID":"e88c04bb-01ff-47a6-8942-05a9a2a68416","Type":"ContainerStarted","Data":"a091448b207aa75d136d6feb237ad0fa14303d634a2df9de676e06282a8c25ec"} Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.111299 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9cd9dbd-qxwq7" event={"ID":"e88c04bb-01ff-47a6-8942-05a9a2a68416","Type":"ContainerStarted","Data":"ad38fd3db04934de38e1df2739d0df091d88ad6e59e5e1dc95f4167e1c88b624"} Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.119686 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5ttrw" event={"ID":"878af3f4-684c-457b-b943-b47aa64dcb58","Type":"ContainerStarted","Data":"333e324eabbfcf368117f7274a7058ff8bdd97d9cf55201eb1b467d1366bbc8e"} Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.150649 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3d35154f-4a7e-4d08-935d-4fadbcd89379","Type":"ContainerStarted","Data":"8993880b1c784dbdab4ca56af6a8e3244b372fd012a9f79a8485e3ed351fefca"} Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.150820 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerName="glance-log" containerID="cri-o://03fec2da359a871d10ad6ab01a043508cb972c9cdaf3fa3c5b4c7087146611a7" gracePeriod=30 Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.151061 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerName="glance-httpd" containerID="cri-o://8993880b1c784dbdab4ca56af6a8e3244b372fd012a9f79a8485e3ed351fefca" gracePeriod=30 Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.157849 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74cd45bd8d-lsl5j" podStartSLOduration=4.157831853 podStartE2EDuration="4.157831853s" podCreationTimestamp="2025-12-01 08:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:33.149122905 +0000 UTC m=+1133.221410809" watchObservedRunningTime="2025-12-01 08:57:33.157831853 +0000 UTC m=+1133.230119757" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.178768 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" event={"ID":"d289ed97-fc00-401d-a724-9ff8a60cbc08","Type":"ContainerStarted","Data":"acfbfb3d1430ee001c9b6d16ca0aacd3ac7696374e4acaa8c614ca2a4f667d20"} Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.179553 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.203746 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f48e5437-c685-4121-954b-1d3d8625bd28","Type":"ContainerStarted","Data":"283d2b6c71d1749815ec0a018e49bcc345e1fbcbf62043d131edc7e8dc4572c1"} Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.203970 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f48e5437-c685-4121-954b-1d3d8625bd28" containerName="glance-log" containerID="cri-o://c2296bd87726d5970873afe432db5d9aae5507574e5750f07b1aeaaaa77c2675" gracePeriod=30 Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.204103 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f48e5437-c685-4121-954b-1d3d8625bd28" containerName="glance-httpd" containerID="cri-o://283d2b6c71d1749815ec0a018e49bcc345e1fbcbf62043d131edc7e8dc4572c1" gracePeriod=30 Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.210199 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78d9cd9dbd-qxwq7" podStartSLOduration=30.935209917999998 podStartE2EDuration="32.210174858s" podCreationTimestamp="2025-12-01 08:57:01 +0000 UTC" firstStartedPulling="2025-12-01 08:57:30.492709527 +0000 UTC m=+1130.564997431" lastFinishedPulling="2025-12-01 08:57:31.767674467 +0000 UTC m=+1131.839962371" observedRunningTime="2025-12-01 08:57:33.205810207 +0000 UTC m=+1133.278098111" watchObservedRunningTime="2025-12-01 08:57:33.210174858 +0000 UTC m=+1133.282462762" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.221909 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f54de58e-9111-462b-a86e-8e324060c8aa","Type":"ContainerStarted","Data":"5d84c0bc33fa0c594dd2e4ac53c19ea7e3a986eb17d8a353c63f16fb5ad089d6"} Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.279400 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=35.279383734 podStartE2EDuration="35.279383734s" podCreationTimestamp="2025-12-01 08:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:33.276191486 +0000 UTC m=+1133.348479390" watchObservedRunningTime="2025-12-01 08:57:33.279383734 +0000 UTC m=+1133.351671628" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.280004 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-5ttrw" podStartSLOduration=5.27371726 podStartE2EDuration="44.280000281s" podCreationTimestamp="2025-12-01 08:56:49 +0000 UTC" firstStartedPulling="2025-12-01 08:56:52.859209846 +0000 UTC m=+1092.931497750" lastFinishedPulling="2025-12-01 08:57:31.865492867 +0000 UTC m=+1131.937780771" observedRunningTime="2025-12-01 08:57:33.246073381 +0000 UTC m=+1133.318361285" watchObservedRunningTime="2025-12-01 08:57:33.280000281 +0000 UTC m=+1133.352288185" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.284174 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d65b9788-2kr5p" event={"ID":"fcebf70c-3de0-499e-928d-3419299a512f","Type":"ContainerStarted","Data":"fab80120b3cdcb11f34e6bc51dab2ce8ef0833fb8a3e2dbb9da58553b25ef62f"} Dec 01 08:57:33 crc kubenswrapper[4689]: 
I1201 08:57:33.284223 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d65b9788-2kr5p" event={"ID":"fcebf70c-3de0-499e-928d-3419299a512f","Type":"ContainerStarted","Data":"06336f59b10ead68cbab11f316eecd5c0e3f08ddf6602c9f77b8f11c685967b5"} Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.320006 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" podStartSLOduration=4.319983686 podStartE2EDuration="4.319983686s" podCreationTimestamp="2025-12-01 08:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:33.306730752 +0000 UTC m=+1133.379018656" watchObservedRunningTime="2025-12-01 08:57:33.319983686 +0000 UTC m=+1133.392271590" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.346436 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=36.346415199 podStartE2EDuration="36.346415199s" podCreationTimestamp="2025-12-01 08:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:33.34382939 +0000 UTC m=+1133.416117294" watchObservedRunningTime="2025-12-01 08:57:33.346415199 +0000 UTC m=+1133.418703103" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.380175 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d65b9788-2kr5p" podStartSLOduration=31.094575073 podStartE2EDuration="32.380156045s" podCreationTimestamp="2025-12-01 08:57:01 +0000 UTC" firstStartedPulling="2025-12-01 08:57:30.477086098 +0000 UTC m=+1130.549374002" lastFinishedPulling="2025-12-01 08:57:31.76266707 +0000 UTC m=+1131.834954974" observedRunningTime="2025-12-01 08:57:33.375675832 +0000 UTC m=+1133.447963746" watchObservedRunningTime="2025-12-01 08:57:33.380156045 +0000 UTC m=+1133.452443949" Dec 01 08:57:33 crc kubenswrapper[4689]: I1201 08:57:33.541676 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58c7f9c74f-nqnzt"] Dec 01 08:57:33 crc kubenswrapper[4689]: W1201 08:57:33.553788 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9834ce74_a0c7_4e32_9d8b_1d39b27c62b6.slice/crio-e99537f2de38fa5a977f7ab777ce44df0fdfb58dad2f4743560e1753c4793988 WatchSource:0}: Error finding container e99537f2de38fa5a977f7ab777ce44df0fdfb58dad2f4743560e1753c4793988: Status 404 returned error can't find the container with id e99537f2de38fa5a977f7ab777ce44df0fdfb58dad2f4743560e1753c4793988 Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.318474 4689 generic.go:334] "Generic (PLEG): container finished" podID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerID="8993880b1c784dbdab4ca56af6a8e3244b372fd012a9f79a8485e3ed351fefca" exitCode=143 Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.319317 4689 generic.go:334] "Generic (PLEG): container finished" podID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerID="03fec2da359a871d10ad6ab01a043508cb972c9cdaf3fa3c5b4c7087146611a7" exitCode=143 Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.319786 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
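
The pod_startup_latency_tracker entries in this section encode a simple relationship: podStartSLOduration is the end-to-end startup time minus the window spent pulling images, and pods that pulled nothing (firstStartedPulling at the zero time 0001-01-01) report SLO equal to E2E. Reproducing the placement-db-sync-f9pr6 figures logged a little earlier, using the monotonic m=+ offsets:

    # Offsets copied from the placement-db-sync-f9pr6 entry above.
    first_started_pulling = 1092.931767847   # m=+ at firstStartedPulling
    last_finished_pulling = 1129.658454284   # m=+ at lastFinishedPulling
    e2e = 41.980027588                       # podStartE2EDuration, seconds

    pull_window = last_finished_pulling - first_started_pulling
    slo = e2e - pull_window
    print(f"pull={pull_window:.9f}s slo={slo:.9f}s")
    # pull=36.726686437s slo=5.253341151s, matching podStartSLOduration
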
event={"ID":"3d35154f-4a7e-4d08-935d-4fadbcd89379","Type":"ContainerDied","Data":"8993880b1c784dbdab4ca56af6a8e3244b372fd012a9f79a8485e3ed351fefca"} Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.319841 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3d35154f-4a7e-4d08-935d-4fadbcd89379","Type":"ContainerDied","Data":"03fec2da359a871d10ad6ab01a043508cb972c9cdaf3fa3c5b4c7087146611a7"} Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.336399 4689 generic.go:334] "Generic (PLEG): container finished" podID="f48e5437-c685-4121-954b-1d3d8625bd28" containerID="283d2b6c71d1749815ec0a018e49bcc345e1fbcbf62043d131edc7e8dc4572c1" exitCode=0 Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.336424 4689 generic.go:334] "Generic (PLEG): container finished" podID="f48e5437-c685-4121-954b-1d3d8625bd28" containerID="c2296bd87726d5970873afe432db5d9aae5507574e5750f07b1aeaaaa77c2675" exitCode=143 Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.336499 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f48e5437-c685-4121-954b-1d3d8625bd28","Type":"ContainerDied","Data":"283d2b6c71d1749815ec0a018e49bcc345e1fbcbf62043d131edc7e8dc4572c1"} Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.336529 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f48e5437-c685-4121-954b-1d3d8625bd28","Type":"ContainerDied","Data":"c2296bd87726d5970873afe432db5d9aae5507574e5750f07b1aeaaaa77c2675"} Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.369710 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c7f9c74f-nqnzt" event={"ID":"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6","Type":"ContainerStarted","Data":"bc43668ef623a0e6a869f03e3da07bda56595552ec8a5eefee58ab3f93110654"} Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.369761 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c7f9c74f-nqnzt" event={"ID":"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6","Type":"ContainerStarted","Data":"e99537f2de38fa5a977f7ab777ce44df0fdfb58dad2f4743560e1753c4793988"} Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.840676 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.852448 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.921948 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-config-data\") pod \"3d35154f-4a7e-4d08-935d-4fadbcd89379\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922014 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-httpd-run\") pod \"3d35154f-4a7e-4d08-935d-4fadbcd89379\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922046 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-config-data\") pod \"f48e5437-c685-4121-954b-1d3d8625bd28\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922070 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-httpd-run\") pod \"f48e5437-c685-4121-954b-1d3d8625bd28\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922145 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"3d35154f-4a7e-4d08-935d-4fadbcd89379\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922166 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-scripts\") pod \"3d35154f-4a7e-4d08-935d-4fadbcd89379\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922183 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f48e5437-c685-4121-954b-1d3d8625bd28\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922205 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-logs\") pod \"3d35154f-4a7e-4d08-935d-4fadbcd89379\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922233 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-combined-ca-bundle\") pod \"3d35154f-4a7e-4d08-935d-4fadbcd89379\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922258 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-logs\") pod \"f48e5437-c685-4121-954b-1d3d8625bd28\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922280 4689 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-scripts\") pod \"f48e5437-c685-4121-954b-1d3d8625bd28\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922322 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9tmx\" (UniqueName: \"kubernetes.io/projected/f48e5437-c685-4121-954b-1d3d8625bd28-kube-api-access-j9tmx\") pod \"f48e5437-c685-4121-954b-1d3d8625bd28\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922344 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpk5p\" (UniqueName: \"kubernetes.io/projected/3d35154f-4a7e-4d08-935d-4fadbcd89379-kube-api-access-kpk5p\") pod \"3d35154f-4a7e-4d08-935d-4fadbcd89379\" (UID: \"3d35154f-4a7e-4d08-935d-4fadbcd89379\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.922924 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-combined-ca-bundle\") pod \"f48e5437-c685-4121-954b-1d3d8625bd28\" (UID: \"f48e5437-c685-4121-954b-1d3d8625bd28\") " Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.924302 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-logs" (OuterVolumeSpecName: "logs") pod "f48e5437-c685-4121-954b-1d3d8625bd28" (UID: "f48e5437-c685-4121-954b-1d3d8625bd28"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.924783 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-logs" (OuterVolumeSpecName: "logs") pod "3d35154f-4a7e-4d08-935d-4fadbcd89379" (UID: "3d35154f-4a7e-4d08-935d-4fadbcd89379"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.925214 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f48e5437-c685-4121-954b-1d3d8625bd28" (UID: "f48e5437-c685-4121-954b-1d3d8625bd28"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.925471 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3d35154f-4a7e-4d08-935d-4fadbcd89379" (UID: "3d35154f-4a7e-4d08-935d-4fadbcd89379"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.936585 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "f48e5437-c685-4121-954b-1d3d8625bd28" (UID: "f48e5437-c685-4121-954b-1d3d8625bd28"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.940091 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48e5437-c685-4121-954b-1d3d8625bd28-kube-api-access-j9tmx" (OuterVolumeSpecName: "kube-api-access-j9tmx") pod "f48e5437-c685-4121-954b-1d3d8625bd28" (UID: "f48e5437-c685-4121-954b-1d3d8625bd28"). InnerVolumeSpecName "kube-api-access-j9tmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.946045 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "3d35154f-4a7e-4d08-935d-4fadbcd89379" (UID: "3d35154f-4a7e-4d08-935d-4fadbcd89379"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.948659 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d35154f-4a7e-4d08-935d-4fadbcd89379-kube-api-access-kpk5p" (OuterVolumeSpecName: "kube-api-access-kpk5p") pod "3d35154f-4a7e-4d08-935d-4fadbcd89379" (UID: "3d35154f-4a7e-4d08-935d-4fadbcd89379"). InnerVolumeSpecName "kube-api-access-kpk5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.953573 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-scripts" (OuterVolumeSpecName: "scripts") pod "f48e5437-c685-4121-954b-1d3d8625bd28" (UID: "f48e5437-c685-4121-954b-1d3d8625bd28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:34 crc kubenswrapper[4689]: I1201 08:57:34.967999 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-scripts" (OuterVolumeSpecName: "scripts") pod "3d35154f-4a7e-4d08-935d-4fadbcd89379" (UID: "3d35154f-4a7e-4d08-935d-4fadbcd89379"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.019834 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f48e5437-c685-4121-954b-1d3d8625bd28" (UID: "f48e5437-c685-4121-954b-1d3d8625bd28"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032477 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032510 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032524 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032534 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032545 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032555 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032563 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9tmx\" (UniqueName: \"kubernetes.io/projected/f48e5437-c685-4121-954b-1d3d8625bd28-kube-api-access-j9tmx\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032572 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpk5p\" (UniqueName: \"kubernetes.io/projected/3d35154f-4a7e-4d08-935d-4fadbcd89379-kube-api-access-kpk5p\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032580 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032588 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d35154f-4a7e-4d08-935d-4fadbcd89379-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.032620 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f48e5437-c685-4121-954b-1d3d8625bd28-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.074864 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-config-data" (OuterVolumeSpecName: "config-data") pod "3d35154f-4a7e-4d08-935d-4fadbcd89379" (UID: "3d35154f-4a7e-4d08-935d-4fadbcd89379"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.105314 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.110228 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.121537 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d35154f-4a7e-4d08-935d-4fadbcd89379" (UID: "3d35154f-4a7e-4d08-935d-4fadbcd89379"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.135598 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.135626 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.135635 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.135644 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d35154f-4a7e-4d08-935d-4fadbcd89379-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.137781 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-config-data" (OuterVolumeSpecName: "config-data") pod "f48e5437-c685-4121-954b-1d3d8625bd28" (UID: "f48e5437-c685-4121-954b-1d3d8625bd28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.237105 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f48e5437-c685-4121-954b-1d3d8625bd28-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.384452 4689 generic.go:334] "Generic (PLEG): container finished" podID="dc8aad14-4d75-45c4-9456-db0e80ffd8e7" containerID="1cd2be9868dfb5bb0036601eef7275e918e44677b634a65081e0bc418c903a5e" exitCode=0 Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.384558 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f9pr6" event={"ID":"dc8aad14-4d75-45c4-9456-db0e80ffd8e7","Type":"ContainerDied","Data":"1cd2be9868dfb5bb0036601eef7275e918e44677b634a65081e0bc418c903a5e"} Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.391326 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3d35154f-4a7e-4d08-935d-4fadbcd89379","Type":"ContainerDied","Data":"d472fe1ac12848acf093cba9e81f55fd128ba744a9c704dfce58b95f282f1b4e"} Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.391415 4689 scope.go:117] "RemoveContainer" containerID="8993880b1c784dbdab4ca56af6a8e3244b372fd012a9f79a8485e3ed351fefca" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.391612 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.396346 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f48e5437-c685-4121-954b-1d3d8625bd28","Type":"ContainerDied","Data":"a891fe1912e34c8d6a036dcffde054cc1af488674122f634483c0c29bd56944c"} Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.396498 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.424394 4689 scope.go:117] "RemoveContainer" containerID="03fec2da359a871d10ad6ab01a043508cb972c9cdaf3fa3c5b4c7087146611a7" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.424631 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58c7f9c74f-nqnzt" event={"ID":"9834ce74-a0c7-4e32-9d8b-1d39b27c62b6","Type":"ContainerStarted","Data":"1e8989bb09ba4cab063051dccce3a179acf8f6d719c13faa0552b72b5f3491d7"} Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.425296 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58c7f9c74f-nqnzt" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.450484 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.455426 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.493120 4689 scope.go:117] "RemoveContainer" containerID="283d2b6c71d1749815ec0a018e49bcc345e1fbcbf62043d131edc7e8dc4572c1" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.493687 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.513049 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.526433 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:57:35 crc kubenswrapper[4689]: E1201 08:57:35.526927 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48e5437-c685-4121-954b-1d3d8625bd28" containerName="glance-log" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.526941 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48e5437-c685-4121-954b-1d3d8625bd28" containerName="glance-log" Dec 01 08:57:35 crc kubenswrapper[4689]: E1201 08:57:35.526955 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerName="glance-log" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.526961 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerName="glance-log" Dec 01 08:57:35 crc kubenswrapper[4689]: E1201 08:57:35.526972 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48e5437-c685-4121-954b-1d3d8625bd28" containerName="glance-httpd" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.526978 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48e5437-c685-4121-954b-1d3d8625bd28" containerName="glance-httpd" Dec 01 08:57:35 crc kubenswrapper[4689]: E1201 08:57:35.527005 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerName="glance-httpd" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.527011 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerName="glance-httpd" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.527169 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerName="glance-log" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.527186 
4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48e5437-c685-4121-954b-1d3d8625bd28" containerName="glance-httpd" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.527201 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d35154f-4a7e-4d08-935d-4fadbcd89379" containerName="glance-httpd" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.527212 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48e5437-c685-4121-954b-1d3d8625bd28" containerName="glance-log" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.528168 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.534883 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.535082 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.535184 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.535288 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p2bzt" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.535440 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.542789 4689 scope.go:117] "RemoveContainer" containerID="c2296bd87726d5970873afe432db5d9aae5507574e5750f07b1aeaaaa77c2675" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.549960 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.559162 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.564986 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.565255 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.595298 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.600577 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58c7f9c74f-nqnzt" podStartSLOduration=3.6005513369999997 podStartE2EDuration="3.600551337s" podCreationTimestamp="2025-12-01 08:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:35.50757828 +0000 UTC m=+1135.579866184" watchObservedRunningTime="2025-12-01 08:57:35.600551337 +0000 UTC m=+1135.672839241" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649198 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649257 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhbjv\" (UniqueName: \"kubernetes.io/projected/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-kube-api-access-fhbjv\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649282 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649305 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649481 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649552 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: 
\"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649611 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-scripts\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649695 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-config-data\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649734 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649844 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649888 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-logs\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649913 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649973 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.649997 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.650081 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.650100 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bgtn\" (UniqueName: \"kubernetes.io/projected/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-kube-api-access-4bgtn\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752093 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhbjv\" (UniqueName: \"kubernetes.io/projected/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-kube-api-access-fhbjv\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752141 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752170 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752210 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752229 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752251 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-scripts\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752281 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-config-data\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752302 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752328 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752353 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-logs\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752385 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752413 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752463 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752479 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bgtn\" (UniqueName: \"kubernetes.io/projected/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-kube-api-access-4bgtn\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752501 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.752589 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") device mount path \"/mnt/openstack/pv02\"" 
pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.753660 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.753806 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.753949 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.755350 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-logs\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.758607 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.762779 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.767334 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-scripts\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.773490 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.780962 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.781778 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-config-data\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.781912 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.802998 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.804287 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.806956 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhbjv\" (UniqueName: \"kubernetes.io/projected/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-kube-api-access-fhbjv\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.910941 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bgtn\" (UniqueName: \"kubernetes.io/projected/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-kube-api-access-4bgtn\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.937628 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:57:35 crc kubenswrapper[4689]: I1201 08:57:35.969664 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " pod="openstack/glance-default-external-api-0" Dec 01 08:57:36 crc kubenswrapper[4689]: I1201 08:57:36.161722 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:36 crc kubenswrapper[4689]: I1201 08:57:36.183682 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:57:36 crc kubenswrapper[4689]: I1201 08:57:36.469587 4689 generic.go:334] "Generic (PLEG): container finished" podID="498e2dd1-b659-447d-9f5d-8a86c48fae77" containerID="2f7514baf81930de56bddc5961cad21fec755c798db6ba412d4b0bbaadcf391a" exitCode=0 Dec 01 08:57:36 crc kubenswrapper[4689]: I1201 08:57:36.469765 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4tdn6" event={"ID":"498e2dd1-b659-447d-9f5d-8a86c48fae77","Type":"ContainerDied","Data":"2f7514baf81930de56bddc5961cad21fec755c798db6ba412d4b0bbaadcf391a"} Dec 01 08:57:36 crc kubenswrapper[4689]: I1201 08:57:36.592721 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:57:36 crc kubenswrapper[4689]: I1201 08:57:36.894932 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:57:37 crc kubenswrapper[4689]: I1201 08:57:37.059259 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d35154f-4a7e-4d08-935d-4fadbcd89379" path="/var/lib/kubelet/pods/3d35154f-4a7e-4d08-935d-4fadbcd89379/volumes" Dec 01 08:57:37 crc kubenswrapper[4689]: I1201 08:57:37.060211 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48e5437-c685-4121-954b-1d3d8625bd28" path="/var/lib/kubelet/pods/f48e5437-c685-4121-954b-1d3d8625bd28/volumes" Dec 01 08:57:37 crc kubenswrapper[4689]: I1201 08:57:37.494819 4689 generic.go:334] "Generic (PLEG): container finished" podID="878af3f4-684c-457b-b943-b47aa64dcb58" containerID="333e324eabbfcf368117f7274a7058ff8bdd97d9cf55201eb1b467d1366bbc8e" exitCode=0 Dec 01 08:57:37 crc kubenswrapper[4689]: I1201 08:57:37.494899 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5ttrw" event={"ID":"878af3f4-684c-457b-b943-b47aa64dcb58","Type":"ContainerDied","Data":"333e324eabbfcf368117f7274a7058ff8bdd97d9cf55201eb1b467d1366bbc8e"} Dec 01 08:57:39 crc kubenswrapper[4689]: W1201 08:57:39.064574 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2df3ac61_47b1_4ca0_a5a2_80b2a94d1361.slice/crio-a6d217e8225f02cd410f30f481332214f8dc6e5ac2c96ec662bf76de52745393 WatchSource:0}: Error finding container a6d217e8225f02cd410f30f481332214f8dc6e5ac2c96ec662bf76de52745393: Status 404 returned error can't find the container with id a6d217e8225f02cd410f30f481332214f8dc6e5ac2c96ec662bf76de52745393 Dec 01 08:57:39 crc kubenswrapper[4689]: W1201 08:57:39.072262 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b734f82_967b_49b2_bb9e_2f17fdcf54d3.slice/crio-23979d6da9c474832d80c68bd7e3d13ee4fd6ef75303ee623b5c595760455ab0 WatchSource:0}: Error finding container 23979d6da9c474832d80c68bd7e3d13ee4fd6ef75303ee623b5c595760455ab0: Status 404 returned error can't find the container with id 23979d6da9c474832d80c68bd7e3d13ee4fd6ef75303ee623b5c595760455ab0 Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.159978 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.160044 4689 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.160096 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.160844 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1e70c73c88326989d073faf6067f98f45b064162bc9402e3b9575ef624c63ae"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.160899 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://d1e70c73c88326989d073faf6067f98f45b064162bc9402e3b9575ef624c63ae" gracePeriod=600 Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.198538 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5ttrw" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.225750 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f9pr6" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.230388 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4tdn6" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.349827 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g54tb\" (UniqueName: \"kubernetes.io/projected/498e2dd1-b659-447d-9f5d-8a86c48fae77-kube-api-access-g54tb\") pod \"498e2dd1-b659-447d-9f5d-8a86c48fae77\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.349876 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8txg\" (UniqueName: \"kubernetes.io/projected/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-kube-api-access-n8txg\") pod \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.349922 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-db-sync-config-data\") pod \"878af3f4-684c-457b-b943-b47aa64dcb58\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.349953 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-credential-keys\") pod \"498e2dd1-b659-447d-9f5d-8a86c48fae77\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350010 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-config-data\") pod \"498e2dd1-b659-447d-9f5d-8a86c48fae77\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350032 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-config-data\") pod \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350078 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-fernet-keys\") pod \"498e2dd1-b659-447d-9f5d-8a86c48fae77\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350113 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-combined-ca-bundle\") pod \"878af3f4-684c-457b-b943-b47aa64dcb58\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350141 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-combined-ca-bundle\") pod \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350168 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-combined-ca-bundle\") pod 
\"498e2dd1-b659-447d-9f5d-8a86c48fae77\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350216 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-scripts\") pod \"498e2dd1-b659-447d-9f5d-8a86c48fae77\" (UID: \"498e2dd1-b659-447d-9f5d-8a86c48fae77\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350264 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-logs\") pod \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350313 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-scripts\") pod \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\" (UID: \"dc8aad14-4d75-45c4-9456-db0e80ffd8e7\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.350343 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4v76\" (UniqueName: \"kubernetes.io/projected/878af3f4-684c-457b-b943-b47aa64dcb58-kube-api-access-l4v76\") pod \"878af3f4-684c-457b-b943-b47aa64dcb58\" (UID: \"878af3f4-684c-457b-b943-b47aa64dcb58\") " Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.384989 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-logs" (OuterVolumeSpecName: "logs") pod "dc8aad14-4d75-45c4-9456-db0e80ffd8e7" (UID: "dc8aad14-4d75-45c4-9456-db0e80ffd8e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.389177 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878af3f4-684c-457b-b943-b47aa64dcb58-kube-api-access-l4v76" (OuterVolumeSpecName: "kube-api-access-l4v76") pod "878af3f4-684c-457b-b943-b47aa64dcb58" (UID: "878af3f4-684c-457b-b943-b47aa64dcb58"). InnerVolumeSpecName "kube-api-access-l4v76". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.389745 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "498e2dd1-b659-447d-9f5d-8a86c48fae77" (UID: "498e2dd1-b659-447d-9f5d-8a86c48fae77"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.389826 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-scripts" (OuterVolumeSpecName: "scripts") pod "498e2dd1-b659-447d-9f5d-8a86c48fae77" (UID: "498e2dd1-b659-447d-9f5d-8a86c48fae77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.390100 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "878af3f4-684c-457b-b943-b47aa64dcb58" (UID: "878af3f4-684c-457b-b943-b47aa64dcb58"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.390308 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "498e2dd1-b659-447d-9f5d-8a86c48fae77" (UID: "498e2dd1-b659-447d-9f5d-8a86c48fae77"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.390399 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-kube-api-access-n8txg" (OuterVolumeSpecName: "kube-api-access-n8txg") pod "dc8aad14-4d75-45c4-9456-db0e80ffd8e7" (UID: "dc8aad14-4d75-45c4-9456-db0e80ffd8e7"). InnerVolumeSpecName "kube-api-access-n8txg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.400131 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-scripts" (OuterVolumeSpecName: "scripts") pod "dc8aad14-4d75-45c4-9456-db0e80ffd8e7" (UID: "dc8aad14-4d75-45c4-9456-db0e80ffd8e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.414612 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498e2dd1-b659-447d-9f5d-8a86c48fae77-kube-api-access-g54tb" (OuterVolumeSpecName: "kube-api-access-g54tb") pod "498e2dd1-b659-447d-9f5d-8a86c48fae77" (UID: "498e2dd1-b659-447d-9f5d-8a86c48fae77"). InnerVolumeSpecName "kube-api-access-g54tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.429794 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-config-data" (OuterVolumeSpecName: "config-data") pod "dc8aad14-4d75-45c4-9456-db0e80ffd8e7" (UID: "dc8aad14-4d75-45c4-9456-db0e80ffd8e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.438105 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "498e2dd1-b659-447d-9f5d-8a86c48fae77" (UID: "498e2dd1-b659-447d-9f5d-8a86c48fae77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.438628 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "878af3f4-684c-457b-b943-b47aa64dcb58" (UID: "878af3f4-684c-457b-b943-b47aa64dcb58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453745 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453786 4689 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453799 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453811 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453821 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453830 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-logs\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453840 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453849 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4v76\" (UniqueName: \"kubernetes.io/projected/878af3f4-684c-457b-b943-b47aa64dcb58-kube-api-access-l4v76\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453860 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g54tb\" (UniqueName: \"kubernetes.io/projected/498e2dd1-b659-447d-9f5d-8a86c48fae77-kube-api-access-g54tb\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453869 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8txg\" (UniqueName: \"kubernetes.io/projected/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-kube-api-access-n8txg\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453878 4689 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/878af3f4-684c-457b-b943-b47aa64dcb58-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.453887 4689 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-credential-keys\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.459524 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc8aad14-4d75-45c4-9456-db0e80ffd8e7" (UID: "dc8aad14-4d75-45c4-9456-db0e80ffd8e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.467530 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-config-data" (OuterVolumeSpecName: "config-data") pod "498e2dd1-b659-447d-9f5d-8a86c48fae77" (UID: "498e2dd1-b659-447d-9f5d-8a86c48fae77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.521523 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361","Type":"ContainerStarted","Data":"a6d217e8225f02cd410f30f481332214f8dc6e5ac2c96ec662bf76de52745393"}
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.541154 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="d1e70c73c88326989d073faf6067f98f45b064162bc9402e3b9575ef624c63ae" exitCode=0
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.541192 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"d1e70c73c88326989d073faf6067f98f45b064162bc9402e3b9575ef624c63ae"}
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.541247 4689 scope.go:117] "RemoveContainer" containerID="dc69fc59569a57f3230435206eb87de05390f897bd389b5558c6be2f4c0990e0"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.548660 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f9pr6" event={"ID":"dc8aad14-4d75-45c4-9456-db0e80ffd8e7","Type":"ContainerDied","Data":"7636d0ac6006d6e95d22685f651202fa724ec186aafbdfed52740e373ba5841b"}
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.548700 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7636d0ac6006d6e95d22685f651202fa724ec186aafbdfed52740e373ba5841b"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.548755 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f9pr6"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.558356 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498e2dd1-b659-447d-9f5d-8a86c48fae77-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.558407 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc8aad14-4d75-45c4-9456-db0e80ffd8e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.586494 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5ttrw"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.586571 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5ttrw" event={"ID":"878af3f4-684c-457b-b943-b47aa64dcb58","Type":"ContainerDied","Data":"480b5f881fb111563da3c8b941ed218a9b39f5962e3551f07aad6d0594280ec2"}
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.586616 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="480b5f881fb111563da3c8b941ed218a9b39f5962e3551f07aad6d0594280ec2"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.598165 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b734f82-967b-49b2-bb9e-2f17fdcf54d3","Type":"ContainerStarted","Data":"23979d6da9c474832d80c68bd7e3d13ee4fd6ef75303ee623b5c595760455ab0"}
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.605993 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4tdn6" event={"ID":"498e2dd1-b659-447d-9f5d-8a86c48fae77","Type":"ContainerDied","Data":"7c8215da768b97abe0d46d4c958b8f3ce95de5e7b7d62023a93781f923fbc53a"}
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.606043 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c8215da768b97abe0d46d4c958b8f3ce95de5e7b7d62023a93781f923fbc53a"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.606173 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4tdn6"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.682567 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-64x67"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.878711 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-v5gml"]
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.879179 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" podUID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" containerName="dnsmasq-dns" containerID="cri-o://dc579cedb070df927524192a142e9de958280e6e24ec7109f350def44b4af7de" gracePeriod=10
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.911526 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-844c6c5cff-mqnnk"]
Dec 01 08:57:39 crc kubenswrapper[4689]: E1201 08:57:39.911958 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498e2dd1-b659-447d-9f5d-8a86c48fae77" containerName="keystone-bootstrap"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.911983 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="498e2dd1-b659-447d-9f5d-8a86c48fae77" containerName="keystone-bootstrap"
Dec 01 08:57:39 crc kubenswrapper[4689]: E1201 08:57:39.911996 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8aad14-4d75-45c4-9456-db0e80ffd8e7" containerName="placement-db-sync"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.912002 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8aad14-4d75-45c4-9456-db0e80ffd8e7" containerName="placement-db-sync"
Dec 01 08:57:39 crc kubenswrapper[4689]: E1201 08:57:39.912020 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878af3f4-684c-457b-b943-b47aa64dcb58" containerName="barbican-db-sync"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.912026 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="878af3f4-684c-457b-b943-b47aa64dcb58" containerName="barbican-db-sync"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.912225 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="498e2dd1-b659-447d-9f5d-8a86c48fae77" containerName="keystone-bootstrap"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.912246 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="878af3f4-684c-457b-b943-b47aa64dcb58" containerName="barbican-db-sync"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.912270 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8aad14-4d75-45c4-9456-db0e80ffd8e7" containerName="placement-db-sync"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.913217 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.922508 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.922739 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-d7gf9"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.932425 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"]
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.934170 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.934179 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.937894 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.948512 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-844c6c5cff-mqnnk"]
Dec 01 08:57:39 crc kubenswrapper[4689]: I1201 08:57:39.994341 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"]
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039560 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059abe7a-8a94-4c9a-8ac2-1830fffad22c-config-data\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039620 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwt47\" (UniqueName: \"kubernetes.io/projected/215d6908-3cbd-486b-adc3-82cdaddef118-kube-api-access-hwt47\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039662 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/059abe7a-8a94-4c9a-8ac2-1830fffad22c-logs\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039693 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215d6908-3cbd-486b-adc3-82cdaddef118-combined-ca-bundle\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039732 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059abe7a-8a94-4c9a-8ac2-1830fffad22c-combined-ca-bundle\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039756 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215d6908-3cbd-486b-adc3-82cdaddef118-config-data\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039784 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215d6908-3cbd-486b-adc3-82cdaddef118-logs\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039836 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/059abe7a-8a94-4c9a-8ac2-1830fffad22c-config-data-custom\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039857 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tm9h\" (UniqueName: \"kubernetes.io/projected/059abe7a-8a94-4c9a-8ac2-1830fffad22c-kube-api-access-7tm9h\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.039875 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215d6908-3cbd-486b-adc3-82cdaddef118-config-data-custom\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.149867 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215d6908-3cbd-486b-adc3-82cdaddef118-combined-ca-bundle\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.149943 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059abe7a-8a94-4c9a-8ac2-1830fffad22c-combined-ca-bundle\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.149967 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215d6908-3cbd-486b-adc3-82cdaddef118-config-data\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.150001 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215d6908-3cbd-486b-adc3-82cdaddef118-logs\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.150060 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/059abe7a-8a94-4c9a-8ac2-1830fffad22c-config-data-custom\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.150078 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tm9h\" (UniqueName: \"kubernetes.io/projected/059abe7a-8a94-4c9a-8ac2-1830fffad22c-kube-api-access-7tm9h\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.150098 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215d6908-3cbd-486b-adc3-82cdaddef118-config-data-custom\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.150123 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059abe7a-8a94-4c9a-8ac2-1830fffad22c-config-data\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.150146 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwt47\" (UniqueName: \"kubernetes.io/projected/215d6908-3cbd-486b-adc3-82cdaddef118-kube-api-access-hwt47\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.150182 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/059abe7a-8a94-4c9a-8ac2-1830fffad22c-logs\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.151921 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/215d6908-3cbd-486b-adc3-82cdaddef118-logs\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.163839 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/059abe7a-8a94-4c9a-8ac2-1830fffad22c-logs\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.170388 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/059abe7a-8a94-4c9a-8ac2-1830fffad22c-config-data-custom\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.179270 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/215d6908-3cbd-486b-adc3-82cdaddef118-combined-ca-bundle\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.202162 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/215d6908-3cbd-486b-adc3-82cdaddef118-config-data-custom\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.202774 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/059abe7a-8a94-4c9a-8ac2-1830fffad22c-combined-ca-bundle\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.216575 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/215d6908-3cbd-486b-adc3-82cdaddef118-config-data\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.242105 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tm9h\" (UniqueName: \"kubernetes.io/projected/059abe7a-8a94-4c9a-8ac2-1830fffad22c-kube-api-access-7tm9h\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.243072 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwt47\" (UniqueName: \"kubernetes.io/projected/215d6908-3cbd-486b-adc3-82cdaddef118-kube-api-access-hwt47\") pod \"barbican-keystone-listener-7cdd6b5dcb-j5dgx\" (UID: \"215d6908-3cbd-486b-adc3-82cdaddef118\") " pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.255573 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-9ggn8"]
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.257134 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.284972 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.286030 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-9ggn8"]
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.323794 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/059abe7a-8a94-4c9a-8ac2-1830fffad22c-config-data\") pod \"barbican-worker-844c6c5cff-mqnnk\" (UID: \"059abe7a-8a94-4c9a-8ac2-1830fffad22c\") " pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.361404 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.361452 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6546\" (UniqueName: \"kubernetes.io/projected/49961df2-6dfd-485f-8f00-3645c115c7f0-kube-api-access-s6546\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.361556 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.361759 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.361825 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-config\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.361924 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.387983 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" podUID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.472502 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.472570 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-config\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.472617 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.472709 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.472747 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6546\" (UniqueName: \"kubernetes.io/projected/49961df2-6dfd-485f-8f00-3645c115c7f0-kube-api-access-s6546\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.472808 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.473823 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.474334 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.486544 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.489417 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-config\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.494499 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.567128 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-844c6c5cff-mqnnk"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.598847 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6546\" (UniqueName: \"kubernetes.io/projected/49961df2-6dfd-485f-8f00-3645c115c7f0-kube-api-access-s6546\") pod \"dnsmasq-dns-75c8ddd69c-9ggn8\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") " pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.606439 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b66ff89fd-wdk5g"]
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.608233 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.614988 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.636993 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b66ff89fd-wdk5g"]
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.698766 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"a73b6758eaf1af9bc3a327d8874afb8d2ff28265d999a583ab055845b6607b6a"}
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.700876 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7575f55b68-75xn5"]
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.702248 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.709205 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-44r55"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.709350 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.709556 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.709673 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.709774 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.712645 4689 generic.go:334] "Generic (PLEG): container finished" podID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" containerID="dc579cedb070df927524192a142e9de958280e6e24ec7109f350def44b4af7de" exitCode=0
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.712749 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" event={"ID":"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2","Type":"ContainerDied","Data":"dc579cedb070df927524192a142e9de958280e6e24ec7109f350def44b4af7de"}
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.722337 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.769309 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7575f55b68-75xn5"]
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.785631 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f4mr\" (UniqueName: \"kubernetes.io/projected/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-kube-api-access-5f4mr\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.785677 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-public-tls-certs\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.785700 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-combined-ca-bundle\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.785876 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz7hw\" (UniqueName: \"kubernetes.io/projected/3c402617-8f98-4531-b798-f395844db3ea-kube-api-access-hz7hw\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.785897 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-fernet-keys\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.785917 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-logs\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.785959 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-combined-ca-bundle\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.786082 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.786195 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data-custom\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.786236 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-config-data\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.786284 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-scripts\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.786327 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-credential-keys\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.786349 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-internal-tls-certs\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.792591 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5454b5d64d-5p8d8"]
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.794133 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.800706 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.800905 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bhgfx"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.801457 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.803337 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.803562 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 01 08:57:40 crc kubenswrapper[4689]: I1201 08:57:40.823471 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5454b5d64d-5p8d8"]
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.853991 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946342 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data-custom\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946565 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-config-data\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946620 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-scripts\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946659 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-credential-keys\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946677 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-internal-tls-certs\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946701 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a31e6c25-e2a2-4c12-9138-6969155a7f20-logs\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946740 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f4mr\" (UniqueName: \"kubernetes.io/projected/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-kube-api-access-5f4mr\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946760 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82fl6\" (UniqueName: \"kubernetes.io/projected/a31e6c25-e2a2-4c12-9138-6969155a7f20-kube-api-access-82fl6\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946786 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-public-tls-certs\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946817 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-combined-ca-bundle\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946837 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-config-data\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946866 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-internal-tls-certs\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946920 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz7hw\" (UniqueName: \"kubernetes.io/projected/3c402617-8f98-4531-b798-f395844db3ea-kube-api-access-hz7hw\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.946947 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-fernet-keys\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.947011 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-logs\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.947077 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-public-tls-certs\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.947181 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-combined-ca-bundle\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.947255 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-combined-ca-bundle\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.947329 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-scripts\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.947360 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.949104 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.949758 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-logs\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.952221 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.953654 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-combined-ca-bundle\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.955271 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.955977 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.963214 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-credential-keys\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.965731 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data-custom\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.966345 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-fernet-keys\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.971582 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-public-tls-certs\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.975032 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-config-data\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.975665 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.975950 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.981340 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz7hw\" (UniqueName: \"kubernetes.io/projected/3c402617-8f98-4531-b798-f395844db3ea-kube-api-access-hz7hw\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.982481 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f4mr\" (UniqueName: \"kubernetes.io/projected/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-kube-api-access-5f4mr\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.984474 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.989234 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-combined-ca-bundle\") pod \"barbican-api-b66ff89fd-wdk5g\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") " pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.998277 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-scripts\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:40.998573 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c402617-8f98-4531-b798-f395844db3ea-internal-tls-certs\") pod \"keystone-7575f55b68-75xn5\" (UID: \"3c402617-8f98-4531-b798-f395844db3ea\") " pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.047962 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-public-tls-certs\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.048021 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-combined-ca-bundle\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.048057 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-scripts\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.048113 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a31e6c25-e2a2-4c12-9138-6969155a7f20-logs\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.048143 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82fl6\" (UniqueName: \"kubernetes.io/projected/a31e6c25-e2a2-4c12-9138-6969155a7f20-kube-api-access-82fl6\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.048167 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-config-data\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.048184 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-internal-tls-certs\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.050897 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a31e6c25-e2a2-4c12-9138-6969155a7f20-logs\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.066471 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-config-data\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.066661 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-internal-tls-certs\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.066837 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-public-tls-certs\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.067497 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-combined-ca-bundle\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.086221 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82fl6\" (UniqueName: \"kubernetes.io/projected/a31e6c25-e2a2-4c12-9138-6969155a7f20-kube-api-access-82fl6\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.087207 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a31e6c25-e2a2-4c12-9138-6969155a7f20-scripts\") pod \"placement-5454b5d64d-5p8d8\" (UID: \"a31e6c25-e2a2-4c12-9138-6969155a7f20\") " pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.281524 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.322808 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-44r55"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.333313 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.472848 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.518080 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.546999 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx"]
Dec 01 08:57:41 crc kubenswrapper[4689]: E1201 08:57:41.568995 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f5f7d36_b4d5_4fa6_9b34_0fb15a5a17b2.slice/crio-conmon-dc579cedb070df927524192a142e9de958280e6e24ec7109f350def44b4af7de.scope\": RecentStats: unable to find data in memory cache]"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.571525 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-844c6c5cff-mqnnk"]
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.742910 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-sb\") pod \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") "
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.743113 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-swift-storage-0\") pod \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") "
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.743137 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-nb\") pod \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") "
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.743178 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-svc\") pod \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") "
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.743224 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-config\") pod \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") "
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.743268 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2tlg\" (UniqueName: \"kubernetes.io/projected/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-kube-api-access-j2tlg\") pod \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\" (UID: \"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2\") "
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.795048 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-9ggn8"]
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.797653 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-kube-api-access-j2tlg" (OuterVolumeSpecName: "kube-api-access-j2tlg") pod "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" (UID: "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2"). InnerVolumeSpecName "kube-api-access-j2tlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.845657 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2tlg\" (UniqueName: \"kubernetes.io/projected/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-kube-api-access-j2tlg\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.867483 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml" event={"ID":"7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2","Type":"ContainerDied","Data":"6a244b9a6cf6e697d35c8d0d4fc07f358f5fa0f527e2fa1fbcee3724013ea09c"}
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.867542 4689 scope.go:117] "RemoveContainer" containerID="dc579cedb070df927524192a142e9de958280e6e24ec7109f350def44b4af7de"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.867695 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-v5gml"
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.894791 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b734f82-967b-49b2-bb9e-2f17fdcf54d3","Type":"ContainerStarted","Data":"bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d"}
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.895533 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" (UID: "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.910788 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-844c6c5cff-mqnnk" event={"ID":"059abe7a-8a94-4c9a-8ac2-1830fffad22c","Type":"ContainerStarted","Data":"067955b53dff6299c011f72c575d8c769936338d22a7728b048cc86359d671e6"}
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.920942 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx" event={"ID":"215d6908-3cbd-486b-adc3-82cdaddef118","Type":"ContainerStarted","Data":"695fecdde4ce9e8bef1b595aefca866a921b96503405aa365e9341727894fc83"}
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.923063 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" (UID: "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.945185 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-config" (OuterVolumeSpecName: "config") pod "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" (UID: "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.946730 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.946755 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.946765 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:41 crc kubenswrapper[4689]: I1201 08:57:41.948284 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" (UID: "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.001595 4689 scope.go:117] "RemoveContainer" containerID="96526ff594f5b22294f4fa9b589c03d2574c4f45f687c40417695c662a77ee2d"
Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.005726 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" (UID: "7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.049586 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.049783 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.050425 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.050448 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.080540 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.188728 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7575f55b68-75xn5"] Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.238956 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.239329 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.253588 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d65b9788-2kr5p" podUID="fcebf70c-3de0-499e-928d-3419299a512f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.306530 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b66ff89fd-wdk5g"] Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.439094 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-v5gml"] Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.448637 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-v5gml"] Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.556127 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5454b5d64d-5p8d8"] Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.979892 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361","Type":"ContainerStarted","Data":"50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682"} Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.984255 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7575f55b68-75xn5" event={"ID":"3c402617-8f98-4531-b798-f395844db3ea","Type":"ContainerStarted","Data":"35ae51de47023cd99a60d38bfe104c2fd42dd116a9a225db9ae04b2137423e6c"} Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 
08:57:42.984390 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7575f55b68-75xn5" Dec 01 08:57:42 crc kubenswrapper[4689]: I1201 08:57:42.984407 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7575f55b68-75xn5" event={"ID":"3c402617-8f98-4531-b798-f395844db3ea","Type":"ContainerStarted","Data":"0a797a311b01a9c0af4f00e6d2b858447509a2b9cabacfc096b814a2e7f959bb"} Dec 01 08:57:43 crc kubenswrapper[4689]: I1201 08:57:43.020328 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7575f55b68-75xn5" podStartSLOduration=3.020304606 podStartE2EDuration="3.020304606s" podCreationTimestamp="2025-12-01 08:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:43.005088988 +0000 UTC m=+1143.077376892" watchObservedRunningTime="2025-12-01 08:57:43.020304606 +0000 UTC m=+1143.092592510" Dec 01 08:57:43 crc kubenswrapper[4689]: I1201 08:57:43.045239 4689 generic.go:334] "Generic (PLEG): container finished" podID="49961df2-6dfd-485f-8f00-3645c115c7f0" containerID="34e82cbade1d32e224ae96afb028dce4fb90180addee695920d1de52c2043cc5" exitCode=0 Dec 01 08:57:43 crc kubenswrapper[4689]: I1201 08:57:43.045520 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" event={"ID":"49961df2-6dfd-485f-8f00-3645c115c7f0","Type":"ContainerDied","Data":"34e82cbade1d32e224ae96afb028dce4fb90180addee695920d1de52c2043cc5"} Dec 01 08:57:43 crc kubenswrapper[4689]: I1201 08:57:43.045575 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" event={"ID":"49961df2-6dfd-485f-8f00-3645c115c7f0","Type":"ContainerStarted","Data":"049f62b77233a003b5ffa0ca17a68f7ae69a7f4f1326100fa830cd63dc2cf1ac"} Dec 01 08:57:43 crc kubenswrapper[4689]: I1201 08:57:43.073044 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" path="/var/lib/kubelet/pods/7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2/volumes" Dec 01 08:57:43 crc kubenswrapper[4689]: I1201 08:57:43.077786 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b66ff89fd-wdk5g" event={"ID":"2943fb44-3da9-4d20-a7ff-7561e1eca1b1","Type":"ContainerStarted","Data":"2e0dabe2ffd5b964b1be2793850d334bd8e6dc08d7f3e264cac7ce5861b091f0"} Dec 01 08:57:43 crc kubenswrapper[4689]: I1201 08:57:43.077849 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b66ff89fd-wdk5g" event={"ID":"2943fb44-3da9-4d20-a7ff-7561e1eca1b1","Type":"ContainerStarted","Data":"e0e065fbd581fd1051894ae8e54a3f2863d0c3352dde57320ce1dde7b0b4f5e7"} Dec 01 08:57:43 crc kubenswrapper[4689]: I1201 08:57:43.106282 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5454b5d64d-5p8d8" event={"ID":"a31e6c25-e2a2-4c12-9138-6969155a7f20","Type":"ContainerStarted","Data":"fcc026c1beca513401d167ecbc5e9517ad1cfdfcfd3ebd0123cd7f985d69ca66"} Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.120082 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b734f82-967b-49b2-bb9e-2f17fdcf54d3","Type":"ContainerStarted","Data":"707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d"} Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.124757 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-5454b5d64d-5p8d8" event={"ID":"a31e6c25-e2a2-4c12-9138-6969155a7f20","Type":"ContainerStarted","Data":"f9408521694509316e8edc7a22486d5c9565b39ffe9f0966a4959c67153ac6bb"} Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.126984 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kx454" event={"ID":"767a61f9-7a7d-43df-b53f-efdc8c693381","Type":"ContainerStarted","Data":"5b0fae6c6cdf40c359bd0d9c1173095622267bdfc4784caa27b64ba809f165ad"} Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.129820 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361","Type":"ContainerStarted","Data":"983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c"} Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.165337 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.165316624999999 podStartE2EDuration="9.165316625s" podCreationTimestamp="2025-12-01 08:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:44.161773108 +0000 UTC m=+1144.234061012" watchObservedRunningTime="2025-12-01 08:57:44.165316625 +0000 UTC m=+1144.237604529" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.207896 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kx454" podStartSLOduration=5.416012438 podStartE2EDuration="55.207877872s" podCreationTimestamp="2025-12-01 08:56:49 +0000 UTC" firstStartedPulling="2025-12-01 08:56:52.878011051 +0000 UTC m=+1092.950298945" lastFinishedPulling="2025-12-01 08:57:42.669876475 +0000 UTC m=+1142.742164379" observedRunningTime="2025-12-01 08:57:44.186391713 +0000 UTC m=+1144.258679627" watchObservedRunningTime="2025-12-01 08:57:44.207877872 +0000 UTC m=+1144.280165776" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.696725 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.696701234 podStartE2EDuration="9.696701234s" podCreationTimestamp="2025-12-01 08:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:44.233488913 +0000 UTC m=+1144.305776817" watchObservedRunningTime="2025-12-01 08:57:44.696701234 +0000 UTC m=+1144.768989138" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.706176 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bd884c498-fvqdz"] Dec 01 08:57:44 crc kubenswrapper[4689]: E1201 08:57:44.706649 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" containerName="init" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.706674 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" containerName="init" Dec 01 08:57:44 crc kubenswrapper[4689]: E1201 08:57:44.706707 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" containerName="dnsmasq-dns" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.706715 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" containerName="dnsmasq-dns" Dec 01 08:57:44 crc kubenswrapper[4689]: 
I1201 08:57:44.706933 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5f7d36-b4d5-4fa6-9b34-0fb15a5a17b2" containerName="dnsmasq-dns" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.708101 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.710205 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.710310 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.728272 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd884c498-fvqdz"] Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.825047 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c74dr\" (UniqueName: \"kubernetes.io/projected/1bd94e50-aa23-4249-acd5-293b272a8123-kube-api-access-c74dr\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.825107 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-config-data-custom\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.825155 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-internal-tls-certs\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.825204 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-config-data\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.825254 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-combined-ca-bundle\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.825287 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bd94e50-aa23-4249-acd5-293b272a8123-logs\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.825340 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-public-tls-certs\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.927089 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c74dr\" (UniqueName: \"kubernetes.io/projected/1bd94e50-aa23-4249-acd5-293b272a8123-kube-api-access-c74dr\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.927141 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-config-data-custom\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.927178 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-internal-tls-certs\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.927217 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-config-data\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.927251 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-combined-ca-bundle\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.927271 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bd94e50-aa23-4249-acd5-293b272a8123-logs\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.927301 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-public-tls-certs\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.927780 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bd94e50-aa23-4249-acd5-293b272a8123-logs\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.935917 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-internal-tls-certs\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.936466 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-public-tls-certs\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.946512 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-config-data-custom\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.948252 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-combined-ca-bundle\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.948424 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bd94e50-aa23-4249-acd5-293b272a8123-config-data\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:44 crc kubenswrapper[4689]: I1201 08:57:44.953204 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c74dr\" (UniqueName: \"kubernetes.io/projected/1bd94e50-aa23-4249-acd5-293b272a8123-kube-api-access-c74dr\") pod \"barbican-api-7bd884c498-fvqdz\" (UID: \"1bd94e50-aa23-4249-acd5-293b272a8123\") " pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:45 crc kubenswrapper[4689]: I1201 08:57:45.031177 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:46 crc kubenswrapper[4689]: I1201 08:57:46.162871 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:46 crc kubenswrapper[4689]: I1201 08:57:46.163520 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:46 crc kubenswrapper[4689]: I1201 08:57:46.185326 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 08:57:46 crc kubenswrapper[4689]: I1201 08:57:46.185406 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 08:57:46 crc kubenswrapper[4689]: I1201 08:57:46.252252 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:46 crc kubenswrapper[4689]: I1201 08:57:46.253902 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:46 crc kubenswrapper[4689]: I1201 08:57:46.288746 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 08:57:46 crc kubenswrapper[4689]: I1201 08:57:46.346720 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 08:57:47 crc kubenswrapper[4689]: I1201 08:57:47.160549 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 08:57:47 crc kubenswrapper[4689]: I1201 08:57:47.160614 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 08:57:47 crc kubenswrapper[4689]: I1201 08:57:47.160638 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:47 crc kubenswrapper[4689]: I1201 08:57:47.160774 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:48 crc kubenswrapper[4689]: I1201 08:57:48.179228 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b66ff89fd-wdk5g" event={"ID":"2943fb44-3da9-4d20-a7ff-7561e1eca1b1","Type":"ContainerStarted","Data":"2513e9f5078291991d4c594a69ed9cd19f37056aa8d5892665df0408b554bd0f"} Dec 01 08:57:48 crc kubenswrapper[4689]: I1201 08:57:48.210879 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b66ff89fd-wdk5g" podStartSLOduration=8.210856342 podStartE2EDuration="8.210856342s" podCreationTimestamp="2025-12-01 08:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:48.202192815 +0000 UTC m=+1148.274480719" watchObservedRunningTime="2025-12-01 08:57:48.210856342 +0000 UTC m=+1148.283144246" Dec 01 08:57:49 crc kubenswrapper[4689]: I1201 08:57:49.191339 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 08:57:49 crc kubenswrapper[4689]: I1201 08:57:49.191956 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 08:57:49 crc kubenswrapper[4689]: I1201 08:57:49.192995 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/barbican-api-b66ff89fd-wdk5g" Dec 01 08:57:49 crc kubenswrapper[4689]: I1201 08:57:49.193037 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b66ff89fd-wdk5g" Dec 01 08:57:50 crc kubenswrapper[4689]: I1201 08:57:50.717374 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd884c498-fvqdz"] Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.240462 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" event={"ID":"49961df2-6dfd-485f-8f00-3645c115c7f0","Type":"ContainerStarted","Data":"96ec147210dfa4e47bb906e58a3b6e317b621baaa8dce9d077448f245c33278c"} Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.241164 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.258265 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5454b5d64d-5p8d8" event={"ID":"a31e6c25-e2a2-4c12-9138-6969155a7f20","Type":"ContainerStarted","Data":"08e8f8e84506e7a382307010a8ec7b1659bf4166b450a73070bcd3c0da0e6e0c"} Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.259502 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5454b5d64d-5p8d8" Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.259521 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5454b5d64d-5p8d8" Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.272259 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" podStartSLOduration=11.272241644 podStartE2EDuration="11.272241644s" podCreationTimestamp="2025-12-01 08:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:51.267806023 +0000 UTC m=+1151.340093927" watchObservedRunningTime="2025-12-01 08:57:51.272241644 +0000 UTC m=+1151.344529548" Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.274638 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-844c6c5cff-mqnnk" event={"ID":"059abe7a-8a94-4c9a-8ac2-1830fffad22c","Type":"ContainerStarted","Data":"0acd60597a74d9abf76384e0b852e7e34a39f2dcf77eb3efc7edf1ccd253913a"} Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.274671 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-844c6c5cff-mqnnk" event={"ID":"059abe7a-8a94-4c9a-8ac2-1830fffad22c","Type":"ContainerStarted","Data":"8bd9f31be1efe06342a4defea5601b99ff5b7b9086ba7ae30035be47008cfb79"} Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.298695 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd884c498-fvqdz" event={"ID":"1bd94e50-aa23-4249-acd5-293b272a8123","Type":"ContainerStarted","Data":"f14135f1a70312e093d9213537addc559c0d2dbfa3cf15e2420d1e286e5c03e8"} Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.314469 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f54de58e-9111-462b-a86e-8e324060c8aa","Type":"ContainerStarted","Data":"e71e70e63614adcc804aca95ac195b353f6938e502c6418ac9609f75ff113a02"} Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.328243 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-5454b5d64d-5p8d8" podStartSLOduration=11.328205497 podStartE2EDuration="11.328205497s" podCreationTimestamp="2025-12-01 08:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:51.303925733 +0000 UTC m=+1151.376213637" watchObservedRunningTime="2025-12-01 08:57:51.328205497 +0000 UTC m=+1151.400493401" Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.333261 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx" event={"ID":"215d6908-3cbd-486b-adc3-82cdaddef118","Type":"ContainerStarted","Data":"f79fda52f97bb1109c8182f19ae8a06bcc0fbaa031bc758884e21e2f7114027d"} Dec 01 08:57:51 crc kubenswrapper[4689]: I1201 08:57:51.334684 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-844c6c5cff-mqnnk" podStartSLOduration=4.173892143 podStartE2EDuration="12.334665155s" podCreationTimestamp="2025-12-01 08:57:39 +0000 UTC" firstStartedPulling="2025-12-01 08:57:41.744142172 +0000 UTC m=+1141.816430076" lastFinishedPulling="2025-12-01 08:57:49.904915184 +0000 UTC m=+1149.977203088" observedRunningTime="2025-12-01 08:57:51.322165902 +0000 UTC m=+1151.394453806" watchObservedRunningTime="2025-12-01 08:57:51.334665155 +0000 UTC m=+1151.406953059" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.051011 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.239185 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d65b9788-2kr5p" podUID="fcebf70c-3de0-499e-928d-3419299a512f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.343328 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd884c498-fvqdz" event={"ID":"1bd94e50-aa23-4249-acd5-293b272a8123","Type":"ContainerStarted","Data":"c6cf7cc709955c82fdd3a5d646dd74cb7c5b64ae4d4f814ee017d833a828e74a"} Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.343404 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd884c498-fvqdz" event={"ID":"1bd94e50-aa23-4249-acd5-293b272a8123","Type":"ContainerStarted","Data":"cb8344f97af39b700dbe458c98ee0e1efbc5c8ed012186f5032e6628dcc171e6"} Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.343630 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.345030 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx" event={"ID":"215d6908-3cbd-486b-adc3-82cdaddef118","Type":"ContainerStarted","Data":"c340f2a1d88ffe01e2216b9a4843ba8c6cf19f6f98cdef60a01797df77ae5014"} Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.363236 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 
08:57:52.363620 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.375414 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7bd884c498-fvqdz" podStartSLOduration=8.375395408 podStartE2EDuration="8.375395408s" podCreationTimestamp="2025-12-01 08:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:57:52.364696654 +0000 UTC m=+1152.436984568" watchObservedRunningTime="2025-12-01 08:57:52.375395408 +0000 UTC m=+1152.447683312" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.420706 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7cdd6b5dcb-j5dgx" podStartSLOduration=5.258097826 podStartE2EDuration="13.420689968s" podCreationTimestamp="2025-12-01 08:57:39 +0000 UTC" firstStartedPulling="2025-12-01 08:57:41.77837891 +0000 UTC m=+1141.850666814" lastFinishedPulling="2025-12-01 08:57:49.940971052 +0000 UTC m=+1150.013258956" observedRunningTime="2025-12-01 08:57:52.389618227 +0000 UTC m=+1152.461906141" watchObservedRunningTime="2025-12-01 08:57:52.420689968 +0000 UTC m=+1152.492977862" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.425217 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.630075 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.630223 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 08:57:52 crc kubenswrapper[4689]: I1201 08:57:52.679704 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 08:57:53 crc kubenswrapper[4689]: I1201 08:57:53.355900 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 08:57:53 crc kubenswrapper[4689]: I1201 08:57:53.357287 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd884c498-fvqdz" Dec 01 08:57:54 crc kubenswrapper[4689]: I1201 08:57:54.234638 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b66ff89fd-wdk5g" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 08:57:54 crc kubenswrapper[4689]: I1201 08:57:54.373928 4689 generic.go:334] "Generic (PLEG): container finished" podID="767a61f9-7a7d-43df-b53f-efdc8c693381" containerID="5b0fae6c6cdf40c359bd0d9c1173095622267bdfc4784caa27b64ba809f165ad" exitCode=0 Dec 01 08:57:54 crc kubenswrapper[4689]: I1201 08:57:54.374279 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kx454" event={"ID":"767a61f9-7a7d-43df-b53f-efdc8c693381","Type":"ContainerDied","Data":"5b0fae6c6cdf40c359bd0d9c1173095622267bdfc4784caa27b64ba809f165ad"} Dec 01 08:57:54 crc kubenswrapper[4689]: I1201 08:57:54.718334 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5454b5d64d-5p8d8" Dec 01 08:57:55 crc kubenswrapper[4689]: I1201 08:57:55.858714 4689 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" Dec 01 08:57:55 crc kubenswrapper[4689]: I1201 08:57:55.917645 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-64x67"] Dec 01 08:57:55 crc kubenswrapper[4689]: I1201 08:57:55.921767 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" podUID="d289ed97-fc00-401d-a724-9ff8a60cbc08" containerName="dnsmasq-dns" containerID="cri-o://acfbfb3d1430ee001c9b6d16ca0aacd3ac7696374e4acaa8c614ca2a4f667d20" gracePeriod=10 Dec 01 08:57:55 crc kubenswrapper[4689]: I1201 08:57:55.959502 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kx454" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.053740 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-scripts\") pod \"767a61f9-7a7d-43df-b53f-efdc8c693381\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.055080 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqlzj\" (UniqueName: \"kubernetes.io/projected/767a61f9-7a7d-43df-b53f-efdc8c693381-kube-api-access-jqlzj\") pod \"767a61f9-7a7d-43df-b53f-efdc8c693381\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.055111 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-db-sync-config-data\") pod \"767a61f9-7a7d-43df-b53f-efdc8c693381\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.055164 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767a61f9-7a7d-43df-b53f-efdc8c693381-etc-machine-id\") pod \"767a61f9-7a7d-43df-b53f-efdc8c693381\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.055197 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-combined-ca-bundle\") pod \"767a61f9-7a7d-43df-b53f-efdc8c693381\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.055243 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-config-data\") pod \"767a61f9-7a7d-43df-b53f-efdc8c693381\" (UID: \"767a61f9-7a7d-43df-b53f-efdc8c693381\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.055749 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/767a61f9-7a7d-43df-b53f-efdc8c693381-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "767a61f9-7a7d-43df-b53f-efdc8c693381" (UID: "767a61f9-7a7d-43df-b53f-efdc8c693381"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.056057 4689 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/767a61f9-7a7d-43df-b53f-efdc8c693381-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.084736 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-scripts" (OuterVolumeSpecName: "scripts") pod "767a61f9-7a7d-43df-b53f-efdc8c693381" (UID: "767a61f9-7a7d-43df-b53f-efdc8c693381"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.084889 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "767a61f9-7a7d-43df-b53f-efdc8c693381" (UID: "767a61f9-7a7d-43df-b53f-efdc8c693381"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.114768 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767a61f9-7a7d-43df-b53f-efdc8c693381-kube-api-access-jqlzj" (OuterVolumeSpecName: "kube-api-access-jqlzj") pod "767a61f9-7a7d-43df-b53f-efdc8c693381" (UID: "767a61f9-7a7d-43df-b53f-efdc8c693381"). InnerVolumeSpecName "kube-api-access-jqlzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.138520 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "767a61f9-7a7d-43df-b53f-efdc8c693381" (UID: "767a61f9-7a7d-43df-b53f-efdc8c693381"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.157585 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqlzj\" (UniqueName: \"kubernetes.io/projected/767a61f9-7a7d-43df-b53f-efdc8c693381-kube-api-access-jqlzj\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.157621 4689 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.157630 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.157640 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.229938 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-config-data" (OuterVolumeSpecName: "config-data") pod "767a61f9-7a7d-43df-b53f-efdc8c693381" (UID: "767a61f9-7a7d-43df-b53f-efdc8c693381"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.266188 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/767a61f9-7a7d-43df-b53f-efdc8c693381-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.399805 4689 generic.go:334] "Generic (PLEG): container finished" podID="d289ed97-fc00-401d-a724-9ff8a60cbc08" containerID="acfbfb3d1430ee001c9b6d16ca0aacd3ac7696374e4acaa8c614ca2a4f667d20" exitCode=0 Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.399869 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" event={"ID":"d289ed97-fc00-401d-a724-9ff8a60cbc08","Type":"ContainerDied","Data":"acfbfb3d1430ee001c9b6d16ca0aacd3ac7696374e4acaa8c614ca2a4f667d20"} Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.407976 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kx454" event={"ID":"767a61f9-7a7d-43df-b53f-efdc8c693381","Type":"ContainerDied","Data":"c349819f5831f8081ecc56f1765983cc4931af41a39edd250a9ba8cdb0cd42ab"} Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.408044 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c349819f5831f8081ecc56f1765983cc4931af41a39edd250a9ba8cdb0cd42ab" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.408124 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kx454" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.593462 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.674954 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-sb\") pod \"d289ed97-fc00-401d-a724-9ff8a60cbc08\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.675037 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-nb\") pod \"d289ed97-fc00-401d-a724-9ff8a60cbc08\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.675114 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbnz4\" (UniqueName: \"kubernetes.io/projected/d289ed97-fc00-401d-a724-9ff8a60cbc08-kube-api-access-hbnz4\") pod \"d289ed97-fc00-401d-a724-9ff8a60cbc08\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.675202 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-svc\") pod \"d289ed97-fc00-401d-a724-9ff8a60cbc08\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.675223 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-config\") pod \"d289ed97-fc00-401d-a724-9ff8a60cbc08\" (UID: 
\"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.675237 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-swift-storage-0\") pod \"d289ed97-fc00-401d-a724-9ff8a60cbc08\" (UID: \"d289ed97-fc00-401d-a724-9ff8a60cbc08\") " Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.722825 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d289ed97-fc00-401d-a724-9ff8a60cbc08-kube-api-access-hbnz4" (OuterVolumeSpecName: "kube-api-access-hbnz4") pod "d289ed97-fc00-401d-a724-9ff8a60cbc08" (UID: "d289ed97-fc00-401d-a724-9ff8a60cbc08"). InnerVolumeSpecName "kube-api-access-hbnz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.770666 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 08:57:56 crc kubenswrapper[4689]: E1201 08:57:56.780838 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d289ed97-fc00-401d-a724-9ff8a60cbc08" containerName="dnsmasq-dns" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.781071 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d289ed97-fc00-401d-a724-9ff8a60cbc08" containerName="dnsmasq-dns" Dec 01 08:57:56 crc kubenswrapper[4689]: E1201 08:57:56.781174 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d289ed97-fc00-401d-a724-9ff8a60cbc08" containerName="init" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.781254 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d289ed97-fc00-401d-a724-9ff8a60cbc08" containerName="init" Dec 01 08:57:56 crc kubenswrapper[4689]: E1201 08:57:56.781342 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767a61f9-7a7d-43df-b53f-efdc8c693381" containerName="cinder-db-sync" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.781422 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="767a61f9-7a7d-43df-b53f-efdc8c693381" containerName="cinder-db-sync" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.781696 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d289ed97-fc00-401d-a724-9ff8a60cbc08" containerName="dnsmasq-dns" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.787961 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="767a61f9-7a7d-43df-b53f-efdc8c693381" containerName="cinder-db-sync" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.787910 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbnz4\" (UniqueName: \"kubernetes.io/projected/d289ed97-fc00-401d-a724-9ff8a60cbc08-kube-api-access-hbnz4\") on node \"crc\" DevicePath \"\"" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.789724 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.804952 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.804954 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w7brp" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.805628 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.806026 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.834321 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.909415 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.922759 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mztfd"] Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.933810 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.935160 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.946929 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06f62c5e-1ae9-4826-a301-aec927bbf6ee-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.947111 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-scripts\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.947340 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lsm\" (UniqueName: \"kubernetes.io/projected/06f62c5e-1ae9-4826-a301-aec927bbf6ee-kube-api-access-22lsm\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0" Dec 01 08:57:56 crc kubenswrapper[4689]: I1201 08:57:56.947484 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0" 
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.004438 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d289ed97-fc00-401d-a724-9ff8a60cbc08" (UID: "d289ed97-fc00-401d-a724-9ff8a60cbc08"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.004522 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mztfd"]
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.012465 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d289ed97-fc00-401d-a724-9ff8a60cbc08" (UID: "d289ed97-fc00-401d-a724-9ff8a60cbc08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.018089 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d289ed97-fc00-401d-a724-9ff8a60cbc08" (UID: "d289ed97-fc00-401d-a724-9ff8a60cbc08"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066596 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-kube-api-access-h8tks\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066681 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-config\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066706 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066738 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066763 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066787 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06f62c5e-1ae9-4826-a301-aec927bbf6ee-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066818 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066835 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-scripts\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066853 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066902 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-svc\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066920 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22lsm\" (UniqueName: \"kubernetes.io/projected/06f62c5e-1ae9-4826-a301-aec927bbf6ee-kube-api-access-22lsm\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066935 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.066993 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.067004 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.067015 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.070117 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06f62c5e-1ae9-4826-a301-aec927bbf6ee-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.073463 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.084403 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-config" (OuterVolumeSpecName: "config") pod "d289ed97-fc00-401d-a724-9ff8a60cbc08" (UID: "d289ed97-fc00-401d-a724-9ff8a60cbc08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.099252 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-scripts\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.101659 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.101769 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.129320 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d289ed97-fc00-401d-a724-9ff8a60cbc08" (UID: "d289ed97-fc00-401d-a724-9ff8a60cbc08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.130978 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22lsm\" (UniqueName: \"kubernetes.io/projected/06f62c5e-1ae9-4826-a301-aec927bbf6ee-kube-api-access-22lsm\") pod \"cinder-scheduler-0\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") " pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.169735 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-svc\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.169818 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-kube-api-access-h8tks\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.169937 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-config\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.169965 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.170010 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.170028 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.170152 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.170171 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d289ed97-fc00-401d-a724-9ff8a60cbc08-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.171060 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.171777 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.174018 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-config\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.174656 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-svc\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.175173 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.211397 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-kube-api-access-h8tks\") pod \"dnsmasq-dns-5784cf869f-mztfd\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.284934 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.286696 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.290777 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.294059 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.374790 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data-custom\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.374950 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e961c41-2024-43a4-bb5a-35926b887048-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.375022 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-scripts\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.375080 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e961c41-2024-43a4-bb5a-35926b887048-logs\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.375131 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.375235 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvt4\" (UniqueName: \"kubernetes.io/projected/6e961c41-2024-43a4-bb5a-35926b887048-kube-api-access-5dvt4\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.375406 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.411770 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.437202 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.467626 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-64x67" event={"ID":"d289ed97-fc00-401d-a724-9ff8a60cbc08","Type":"ContainerDied","Data":"90a21ce1fc6f0995f10c2de3ec7e77758dc11416f41f68a601c1ae868dd10e32"}
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.467705 4689 scope.go:117] "RemoveContainer" containerID="acfbfb3d1430ee001c9b6d16ca0aacd3ac7696374e4acaa8c614ca2a4f667d20"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.467896 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-64x67"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.481927 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvt4\" (UniqueName: \"kubernetes.io/projected/6e961c41-2024-43a4-bb5a-35926b887048-kube-api-access-5dvt4\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.482036 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.482077 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data-custom\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.482102 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e961c41-2024-43a4-bb5a-35926b887048-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.482136 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-scripts\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.482158 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e961c41-2024-43a4-bb5a-35926b887048-logs\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.482180 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.492443 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e961c41-2024-43a4-bb5a-35926b887048-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.492791 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e961c41-2024-43a4-bb5a-35926b887048-logs\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.503475 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.504037 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data-custom\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.504035 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-scripts\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.514390 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dvt4\" (UniqueName: \"kubernetes.io/projected/6e961c41-2024-43a4-bb5a-35926b887048-kube-api-access-5dvt4\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.519420 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " pod="openstack/cinder-api-0"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.552124 4689 scope.go:117] "RemoveContainer" containerID="ae987a74afbed6f910f8ca5aaedca7f12194eaf2b46b2eaf88f4063d6bd6822e"
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.579441 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-64x67"]
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.613266 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-64x67"]
Dec 01 08:57:57 crc kubenswrapper[4689]: I1201 08:57:57.613689 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 01 08:57:58 crc kubenswrapper[4689]: I1201 08:57:58.178896 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 01 08:57:58 crc kubenswrapper[4689]: I1201 08:57:58.321976 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mztfd"]
Dec 01 08:57:58 crc kubenswrapper[4689]: I1201 08:57:58.337529 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-b66ff89fd-wdk5g" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 01 08:57:58 crc kubenswrapper[4689]: I1201 08:57:58.509043 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"06f62c5e-1ae9-4826-a301-aec927bbf6ee","Type":"ContainerStarted","Data":"52c3ae6842c62eae67440fd7dbf918fcb814ff64ca563f76b193bab95440544d"}
Dec 01 08:57:58 crc kubenswrapper[4689]: W1201 08:57:58.532270 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e961c41_2024_43a4_bb5a_35926b887048.slice/crio-7e56745e8a5341e8537cebf20bfed88b7968484ba8aca6bf01f4adba8f05d925 WatchSource:0}: Error finding container 7e56745e8a5341e8537cebf20bfed88b7968484ba8aca6bf01f4adba8f05d925: Status 404 returned error can't find the container with id 7e56745e8a5341e8537cebf20bfed88b7968484ba8aca6bf01f4adba8f05d925
Dec 01 08:57:58 crc kubenswrapper[4689]: I1201 08:57:58.533196 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 01 08:57:58 crc kubenswrapper[4689]: I1201 08:57:58.538567 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" event={"ID":"5493cbb2-5880-48d7-81fe-46ab0e2dcb68","Type":"ContainerStarted","Data":"4e0885fdc6c97cca525b2aee357d1601ee27c8d2142e202a426b2b699dd883b9"}
Dec 01 08:57:58 crc kubenswrapper[4689]: I1201 08:57:58.982489 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:57:59 crc kubenswrapper[4689]: I1201 08:57:59.064635 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d289ed97-fc00-401d-a724-9ff8a60cbc08" path="/var/lib/kubelet/pods/d289ed97-fc00-401d-a724-9ff8a60cbc08/volumes"
Dec 01 08:57:59 crc kubenswrapper[4689]: I1201 08:57:59.554993 4689 generic.go:334] "Generic (PLEG): container finished" podID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" containerID="187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74" exitCode=0
Dec 01 08:57:59 crc kubenswrapper[4689]: I1201 08:57:59.555970 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" event={"ID":"5493cbb2-5880-48d7-81fe-46ab0e2dcb68","Type":"ContainerDied","Data":"187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74"}
Dec 01 08:57:59 crc kubenswrapper[4689]: I1201 08:57:59.568909 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e961c41-2024-43a4-bb5a-35926b887048","Type":"ContainerStarted","Data":"7e56745e8a5341e8537cebf20bfed88b7968484ba8aca6bf01f4adba8f05d925"}
Dec 01 08:57:59 crc kubenswrapper[4689]: I1201 08:57:59.911394 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 01 08:58:00 crc kubenswrapper[4689]: I1201 08:58:00.175202 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74cd45bd8d-lsl5j"
Dec 01 08:58:00 crc kubenswrapper[4689]: I1201 08:58:00.604391 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" event={"ID":"5493cbb2-5880-48d7-81fe-46ab0e2dcb68","Type":"ContainerStarted","Data":"5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67"}
Dec 01 08:58:00 crc kubenswrapper[4689]: I1201 08:58:00.605416 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:58:00 crc kubenswrapper[4689]: I1201 08:58:00.616539 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e961c41-2024-43a4-bb5a-35926b887048","Type":"ContainerStarted","Data":"96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412"}
Dec 01 08:58:00 crc kubenswrapper[4689]: I1201 08:58:00.658073 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" podStartSLOduration=4.658007358 podStartE2EDuration="4.658007358s" podCreationTimestamp="2025-12-01 08:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:58:00.634797982 +0000 UTC m=+1160.707085896" watchObservedRunningTime="2025-12-01 08:58:00.658007358 +0000 UTC m=+1160.730295272"
Dec 01 08:58:01 crc kubenswrapper[4689]: I1201 08:58:01.074143 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:58:01 crc kubenswrapper[4689]: I1201 08:58:01.640738 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e961c41-2024-43a4-bb5a-35926b887048","Type":"ContainerStarted","Data":"6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06"}
Dec 01 08:58:01 crc kubenswrapper[4689]: I1201 08:58:01.640833 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6e961c41-2024-43a4-bb5a-35926b887048" containerName="cinder-api-log" containerID="cri-o://96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412" gracePeriod=30
Dec 01 08:58:01 crc kubenswrapper[4689]: I1201 08:58:01.641317 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 01 08:58:01 crc kubenswrapper[4689]: I1201 08:58:01.641403 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6e961c41-2024-43a4-bb5a-35926b887048" containerName="cinder-api" containerID="cri-o://6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06" gracePeriod=30
Dec 01 08:58:01 crc kubenswrapper[4689]: I1201 08:58:01.650305 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"06f62c5e-1ae9-4826-a301-aec927bbf6ee","Type":"ContainerStarted","Data":"90f432843c2bec581f8bf0e2a901e9ea29474a47f13a34f458024b9b0e347a68"}
Dec 01 08:58:01 crc kubenswrapper[4689]: I1201 08:58:01.667936 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.667918816 podStartE2EDuration="4.667918816s" podCreationTimestamp="2025-12-01 08:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:58:01.663462974 +0000 UTC m=+1161.735750888" watchObservedRunningTime="2025-12-01 08:58:01.667918816 +0000 UTC m=+1161.740206720"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.039855 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd884c498-fvqdz" podUID="1bd94e50-aa23-4249-acd5-293b272a8123" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.054600 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.054688 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d9cd9dbd-qxwq7"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.055506 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"a091448b207aa75d136d6feb237ad0fa14303d634a2df9de676e06282a8c25ec"} pod="openstack/horizon-78d9cd9dbd-qxwq7" containerMessage="Container horizon failed startup probe, will be restarted"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.055542 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" containerID="cri-o://a091448b207aa75d136d6feb237ad0fa14303d634a2df9de676e06282a8c25ec" gracePeriod=30
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.238528 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d65b9788-2kr5p" podUID="fcebf70c-3de0-499e-928d-3419299a512f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.238612 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d65b9788-2kr5p"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.239380 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"fab80120b3cdcb11f34e6bc51dab2ce8ef0833fb8a3e2dbb9da58553b25ef62f"} pod="openstack/horizon-d65b9788-2kr5p" containerMessage="Container horizon failed startup probe, will be restarted"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.239412 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-d65b9788-2kr5p" podUID="fcebf70c-3de0-499e-928d-3419299a512f" containerName="horizon" containerID="cri-o://fab80120b3cdcb11f34e6bc51dab2ce8ef0833fb8a3e2dbb9da58553b25ef62f" gracePeriod=30
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.661581 4689 generic.go:334] "Generic (PLEG): container finished" podID="6e961c41-2024-43a4-bb5a-35926b887048" containerID="96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412" exitCode=143
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.661645 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e961c41-2024-43a4-bb5a-35926b887048","Type":"ContainerDied","Data":"96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412"}
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.666475 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"06f62c5e-1ae9-4826-a301-aec927bbf6ee","Type":"ContainerStarted","Data":"29df931bc4bdb53dfcc3554a0b8d48b1a1675bd31d56afea497a5f874937b44a"}
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.691399 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.560109072 podStartE2EDuration="6.691376446s" podCreationTimestamp="2025-12-01 08:57:56 +0000 UTC" firstStartedPulling="2025-12-01 08:57:58.179225776 +0000 UTC m=+1158.251513690" lastFinishedPulling="2025-12-01 08:57:59.31049316 +0000 UTC m=+1159.382781064" observedRunningTime="2025-12-01 08:58:02.684327062 +0000 UTC m=+1162.756614966" watchObservedRunningTime="2025-12-01 08:58:02.691376446 +0000 UTC m=+1162.763664350"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.822177 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58c7f9c74f-nqnzt"
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.914055 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74cd45bd8d-lsl5j"]
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.915121 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74cd45bd8d-lsl5j" podUID="50356777-8001-44ef-95a4-73db83be36bc" containerName="neutron-httpd" containerID="cri-o://24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f" gracePeriod=30
Dec 01 08:58:02 crc kubenswrapper[4689]: I1201 08:58:02.914773 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74cd45bd8d-lsl5j" podUID="50356777-8001-44ef-95a4-73db83be36bc" containerName="neutron-api" containerID="cri-o://ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e" gracePeriod=30
Dec 01 08:58:03 crc kubenswrapper[4689]: I1201 08:58:03.698574 4689 generic.go:334] "Generic (PLEG): container finished" podID="50356777-8001-44ef-95a4-73db83be36bc" containerID="24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f" exitCode=0
Dec 01 08:58:03 crc kubenswrapper[4689]: I1201 08:58:03.698765 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cd45bd8d-lsl5j" event={"ID":"50356777-8001-44ef-95a4-73db83be36bc","Type":"ContainerDied","Data":"24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f"}
Dec 01 08:58:03 crc kubenswrapper[4689]: I1201 08:58:03.958895 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bd884c498-fvqdz"
Dec 01 08:58:04 crc kubenswrapper[4689]: I1201 08:58:04.415020 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bd884c498-fvqdz"
Dec 01 08:58:04 crc kubenswrapper[4689]: I1201 08:58:04.500516 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b66ff89fd-wdk5g"]
Dec 01 08:58:04 crc kubenswrapper[4689]: I1201 08:58:04.500793 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b66ff89fd-wdk5g" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api-log" containerID="cri-o://2e0dabe2ffd5b964b1be2793850d334bd8e6dc08d7f3e264cac7ce5861b091f0" gracePeriod=30
Dec 01 08:58:04 crc kubenswrapper[4689]: I1201 08:58:04.501338 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b66ff89fd-wdk5g" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api" containerID="cri-o://2513e9f5078291991d4c594a69ed9cd19f37056aa8d5892665df0408b554bd0f" gracePeriod=30
Dec 01 08:58:04 crc kubenswrapper[4689]: I1201 08:58:04.721377 4689 generic.go:334] "Generic (PLEG): container finished" podID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerID="2e0dabe2ffd5b964b1be2793850d334bd8e6dc08d7f3e264cac7ce5861b091f0" exitCode=143
Dec 01 08:58:04 crc kubenswrapper[4689]: I1201 08:58:04.721489 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b66ff89fd-wdk5g" event={"ID":"2943fb44-3da9-4d20-a7ff-7561e1eca1b1","Type":"ContainerDied","Data":"2e0dabe2ffd5b964b1be2793850d334bd8e6dc08d7f3e264cac7ce5861b091f0"}
Dec 01 08:58:07 crc kubenswrapper[4689]: I1201 08:58:07.413259 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 01 08:58:07 crc kubenswrapper[4689]: I1201 08:58:07.440717 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-mztfd"
Dec 01 08:58:07 crc kubenswrapper[4689]: I1201 08:58:07.544982 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-9ggn8"]
Dec 01 08:58:07 crc kubenswrapper[4689]: I1201 08:58:07.545204 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" podUID="49961df2-6dfd-485f-8f00-3645c115c7f0" containerName="dnsmasq-dns" containerID="cri-o://96ec147210dfa4e47bb906e58a3b6e317b621baaa8dce9d077448f245c33278c" gracePeriod=10
Dec 01 08:58:07 crc kubenswrapper[4689]: I1201 08:58:07.761489 4689 generic.go:334] "Generic (PLEG): container finished" podID="49961df2-6dfd-485f-8f00-3645c115c7f0" containerID="96ec147210dfa4e47bb906e58a3b6e317b621baaa8dce9d077448f245c33278c" exitCode=0
Dec 01 08:58:07 crc kubenswrapper[4689]: I1201 08:58:07.761844 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" event={"ID":"49961df2-6dfd-485f-8f00-3645c115c7f0","Type":"ContainerDied","Data":"96ec147210dfa4e47bb906e58a3b6e317b621baaa8dce9d077448f245c33278c"}
Dec 01 08:58:07 crc kubenswrapper[4689]: I1201 08:58:07.891872 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 01 08:58:07 crc kubenswrapper[4689]: I1201 08:58:07.992140 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 01 08:58:08 crc kubenswrapper[4689]: I1201 08:58:08.780840 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerName="cinder-scheduler" containerID="cri-o://90f432843c2bec581f8bf0e2a901e9ea29474a47f13a34f458024b9b0e347a68" gracePeriod=30
Dec 01 08:58:08 crc kubenswrapper[4689]: I1201 08:58:08.781661 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerName="probe" containerID="cri-o://29df931bc4bdb53dfcc3554a0b8d48b1a1675bd31d56afea497a5f874937b44a" gracePeriod=30
Dec 01 08:58:09 crc kubenswrapper[4689]: I1201 08:58:09.196769 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b66ff89fd-wdk5g" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:60986->10.217.0.160:9311: read: connection reset by peer"
Dec 01 08:58:09 crc kubenswrapper[4689]: I1201 08:58:09.197126 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b66ff89fd-wdk5g" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:60984->10.217.0.160:9311: read: connection reset by peer"
Dec 01 08:58:09 crc kubenswrapper[4689]: I1201 08:58:09.795819 4689 generic.go:334] "Generic (PLEG): container finished" podID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerID="29df931bc4bdb53dfcc3554a0b8d48b1a1675bd31d56afea497a5f874937b44a" exitCode=0
Dec 01 08:58:09 crc kubenswrapper[4689]: I1201 08:58:09.795853 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"06f62c5e-1ae9-4826-a301-aec927bbf6ee","Type":"ContainerDied","Data":"29df931bc4bdb53dfcc3554a0b8d48b1a1675bd31d56afea497a5f874937b44a"}
Dec 01 08:58:09 crc kubenswrapper[4689]: I1201 08:58:09.800343 4689 generic.go:334] "Generic (PLEG): container finished" podID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerID="2513e9f5078291991d4c594a69ed9cd19f37056aa8d5892665df0408b554bd0f" exitCode=0
Dec 01 08:58:09 crc kubenswrapper[4689]: I1201 08:58:09.800403 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b66ff89fd-wdk5g" event={"ID":"2943fb44-3da9-4d20-a7ff-7561e1eca1b1","Type":"ContainerDied","Data":"2513e9f5078291991d4c594a69ed9cd19f37056aa8d5892665df0408b554bd0f"}
Dec 01 08:58:10 crc kubenswrapper[4689]: I1201 08:58:10.846633 4689 generic.go:334] "Generic (PLEG): container finished" podID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerID="90f432843c2bec581f8bf0e2a901e9ea29474a47f13a34f458024b9b0e347a68" exitCode=0
Dec 01 08:58:10 crc kubenswrapper[4689]: I1201 08:58:10.846896 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"06f62c5e-1ae9-4826-a301-aec927bbf6ee","Type":"ContainerDied","Data":"90f432843c2bec581f8bf0e2a901e9ea29474a47f13a34f458024b9b0e347a68"}
Dec 01 08:58:11 crc kubenswrapper[4689]: I1201 08:58:11.283320 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b66ff89fd-wdk5g" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused"
Dec 01 08:58:11 crc kubenswrapper[4689]: I1201 08:58:11.284172 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b66ff89fd-wdk5g" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": dial tcp 10.217.0.160:9311: connect: connection refused"
Dec 01 08:58:11 crc kubenswrapper[4689]: I1201 08:58:11.345891 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 01 08:58:11 crc kubenswrapper[4689]: I1201 08:58:11.500932 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5454b5d64d-5p8d8"
Dec 01 08:58:11 crc kubenswrapper[4689]: I1201 08:58:11.988923 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.019419 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-nb\") pod \"49961df2-6dfd-485f-8f00-3645c115c7f0\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") "
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.019492 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-sb\") pod \"49961df2-6dfd-485f-8f00-3645c115c7f0\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") "
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.019591 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-swift-storage-0\") pod \"49961df2-6dfd-485f-8f00-3645c115c7f0\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") "
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.019619 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6546\" (UniqueName: \"kubernetes.io/projected/49961df2-6dfd-485f-8f00-3645c115c7f0-kube-api-access-s6546\") pod \"49961df2-6dfd-485f-8f00-3645c115c7f0\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") "
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.019667 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-config\") pod \"49961df2-6dfd-485f-8f00-3645c115c7f0\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") "
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.019759 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-svc\") pod \"49961df2-6dfd-485f-8f00-3645c115c7f0\" (UID: \"49961df2-6dfd-485f-8f00-3645c115c7f0\") "
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.083815 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49961df2-6dfd-485f-8f00-3645c115c7f0-kube-api-access-s6546" (OuterVolumeSpecName: "kube-api-access-s6546") pod "49961df2-6dfd-485f-8f00-3645c115c7f0" (UID: "49961df2-6dfd-485f-8f00-3645c115c7f0"). InnerVolumeSpecName "kube-api-access-s6546". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.107905 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49961df2-6dfd-485f-8f00-3645c115c7f0" (UID: "49961df2-6dfd-485f-8f00-3645c115c7f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.128981 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49961df2-6dfd-485f-8f00-3645c115c7f0" (UID: "49961df2-6dfd-485f-8f00-3645c115c7f0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.129965 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.131054 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.131160 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6546\" (UniqueName: \"kubernetes.io/projected/49961df2-6dfd-485f-8f00-3645c115c7f0-kube-api-access-s6546\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.132491 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49961df2-6dfd-485f-8f00-3645c115c7f0" (UID: "49961df2-6dfd-485f-8f00-3645c115c7f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.149029 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-config" (OuterVolumeSpecName: "config") pod "49961df2-6dfd-485f-8f00-3645c115c7f0" (UID: "49961df2-6dfd-485f-8f00-3645c115c7f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.175413 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49961df2-6dfd-485f-8f00-3645c115c7f0" (UID: "49961df2-6dfd-485f-8f00-3645c115c7f0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.232501 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.232538 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.232549 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49961df2-6dfd-485f-8f00-3645c115c7f0-config\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.867348 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" event={"ID":"49961df2-6dfd-485f-8f00-3645c115c7f0","Type":"ContainerDied","Data":"049f62b77233a003b5ffa0ca17a68f7ae69a7f4f1326100fa830cd63dc2cf1ac"}
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.867498 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8"
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.867642 4689 scope.go:117] "RemoveContainer" containerID="96ec147210dfa4e47bb906e58a3b6e317b621baaa8dce9d077448f245c33278c"
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.936586 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-9ggn8"]
Dec 01 08:58:12 crc kubenswrapper[4689]: I1201 08:58:12.955672 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-9ggn8"]
Dec 01 08:58:13 crc kubenswrapper[4689]: E1201 08:58:13.044393 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest"
Dec 01 08:58:13 crc kubenswrapper[4689]: E1201 08:58:13.044571 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r29fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f54de58e-9111-462b-a86e-8e324060c8aa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 01 08:58:13 crc kubenswrapper[4689]: E1201 08:58:13.045850 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="f54de58e-9111-462b-a86e-8e324060c8aa"
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.062236 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49961df2-6dfd-485f-8f00-3645c115c7f0" path="/var/lib/kubelet/pods/49961df2-6dfd-485f-8f00-3645c115c7f0/volumes"
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.149725 4689 scope.go:117] "RemoveContainer" containerID="34e82cbade1d32e224ae96afb028dce4fb90180addee695920d1de52c2043cc5"
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.407080 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.472727 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-combined-ca-bundle\") pod \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.472808 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-scripts\") pod \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.472868 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06f62c5e-1ae9-4826-a301-aec927bbf6ee-etc-machine-id\") pod \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.472924 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data-custom\") pod \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.473017 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data\") pod \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.473091 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22lsm\" (UniqueName: \"kubernetes.io/projected/06f62c5e-1ae9-4826-a301-aec927bbf6ee-kube-api-access-22lsm\") pod \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\" (UID: \"06f62c5e-1ae9-4826-a301-aec927bbf6ee\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.474477 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06f62c5e-1ae9-4826-a301-aec927bbf6ee-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "06f62c5e-1ae9-4826-a301-aec927bbf6ee" (UID: "06f62c5e-1ae9-4826-a301-aec927bbf6ee"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.498139 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f62c5e-1ae9-4826-a301-aec927bbf6ee-kube-api-access-22lsm" (OuterVolumeSpecName: "kube-api-access-22lsm") pod "06f62c5e-1ae9-4826-a301-aec927bbf6ee" (UID: "06f62c5e-1ae9-4826-a301-aec927bbf6ee"). InnerVolumeSpecName "kube-api-access-22lsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.517531 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "06f62c5e-1ae9-4826-a301-aec927bbf6ee" (UID: "06f62c5e-1ae9-4826-a301-aec927bbf6ee"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.552281 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-scripts" (OuterVolumeSpecName: "scripts") pod "06f62c5e-1ae9-4826-a301-aec927bbf6ee" (UID: "06f62c5e-1ae9-4826-a301-aec927bbf6ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.586812 4689 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/06f62c5e-1ae9-4826-a301-aec927bbf6ee-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.587153 4689 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.587199 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22lsm\" (UniqueName: \"kubernetes.io/projected/06f62c5e-1ae9-4826-a301-aec927bbf6ee-kube-api-access-22lsm\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.587213 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.604555 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06f62c5e-1ae9-4826-a301-aec927bbf6ee" (UID: "06f62c5e-1ae9-4826-a301-aec927bbf6ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.609471 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b66ff89fd-wdk5g"
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.688553 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-combined-ca-bundle\") pod \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.688611 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f4mr\" (UniqueName: \"kubernetes.io/projected/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-kube-api-access-5f4mr\") pod \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.688659 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data\") pod \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.688695 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data-custom\") pod \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.688816 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-logs\") pod \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\" (UID: \"2943fb44-3da9-4d20-a7ff-7561e1eca1b1\") "
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.689298 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.690291 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-logs" (OuterVolumeSpecName: "logs") pod "2943fb44-3da9-4d20-a7ff-7561e1eca1b1" (UID: "2943fb44-3da9-4d20-a7ff-7561e1eca1b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.693866 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-kube-api-access-5f4mr" (OuterVolumeSpecName: "kube-api-access-5f4mr") pod "2943fb44-3da9-4d20-a7ff-7561e1eca1b1" (UID: "2943fb44-3da9-4d20-a7ff-7561e1eca1b1"). InnerVolumeSpecName "kube-api-access-5f4mr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.700525 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2943fb44-3da9-4d20-a7ff-7561e1eca1b1" (UID: "2943fb44-3da9-4d20-a7ff-7561e1eca1b1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.700719 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data" (OuterVolumeSpecName: "config-data") pod "06f62c5e-1ae9-4826-a301-aec927bbf6ee" (UID: "06f62c5e-1ae9-4826-a301-aec927bbf6ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.728686 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2943fb44-3da9-4d20-a7ff-7561e1eca1b1" (UID: "2943fb44-3da9-4d20-a7ff-7561e1eca1b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.751744 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data" (OuterVolumeSpecName: "config-data") pod "2943fb44-3da9-4d20-a7ff-7561e1eca1b1" (UID: "2943fb44-3da9-4d20-a7ff-7561e1eca1b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.792214 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.792517 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f4mr\" (UniqueName: \"kubernetes.io/projected/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-kube-api-access-5f4mr\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.792581 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.792636 4689 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.792710 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f62c5e-1ae9-4826-a301-aec927bbf6ee-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.792767 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2943fb44-3da9-4d20-a7ff-7561e1eca1b1-logs\") on node \"crc\" DevicePath \"\""
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.851199 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7575f55b68-75xn5"
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.882235 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"06f62c5e-1ae9-4826-a301-aec927bbf6ee","Type":"ContainerDied","Data":"52c3ae6842c62eae67440fd7dbf918fcb814ff64ca563f76b193bab95440544d"}
Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.882334 4689 scope.go:117] "RemoveContainer"
containerID="29df931bc4bdb53dfcc3554a0b8d48b1a1675bd31d56afea497a5f874937b44a" Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.882264 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.890824 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f54de58e-9111-462b-a86e-8e324060c8aa" containerName="ceilometer-notification-agent" containerID="cri-o://5d84c0bc33fa0c594dd2e4ac53c19ea7e3a986eb17d8a353c63f16fb5ad089d6" gracePeriod=30 Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.891167 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b66ff89fd-wdk5g" Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.891501 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b66ff89fd-wdk5g" event={"ID":"2943fb44-3da9-4d20-a7ff-7561e1eca1b1","Type":"ContainerDied","Data":"e0e065fbd581fd1051894ae8e54a3f2863d0c3352dde57320ce1dde7b0b4f5e7"} Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.891849 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f54de58e-9111-462b-a86e-8e324060c8aa" containerName="sg-core" containerID="cri-o://e71e70e63614adcc804aca95ac195b353f6938e502c6418ac9609f75ff113a02" gracePeriod=30 Dec 01 08:58:13 crc kubenswrapper[4689]: I1201 08:58:13.998768 4689 scope.go:117] "RemoveContainer" containerID="90f432843c2bec581f8bf0e2a901e9ea29474a47f13a34f458024b9b0e347a68" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.025782 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.038480 4689 scope.go:117] "RemoveContainer" containerID="2513e9f5078291991d4c594a69ed9cd19f37056aa8d5892665df0408b554bd0f" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.046343 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.067320 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b66ff89fd-wdk5g"] Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.079262 4689 scope.go:117] "RemoveContainer" containerID="2e0dabe2ffd5b964b1be2793850d334bd8e6dc08d7f3e264cac7ce5861b091f0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.109441 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b66ff89fd-wdk5g"] Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.128186 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 08:58:14 crc kubenswrapper[4689]: E1201 08:58:14.128887 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49961df2-6dfd-485f-8f00-3645c115c7f0" containerName="init" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.128940 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="49961df2-6dfd-485f-8f00-3645c115c7f0" containerName="init" Dec 01 08:58:14 crc kubenswrapper[4689]: E1201 08:58:14.128954 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerName="probe" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.128960 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerName="probe" Dec 01 08:58:14 
crc kubenswrapper[4689]: E1201 08:58:14.128972 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49961df2-6dfd-485f-8f00-3645c115c7f0" containerName="dnsmasq-dns" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.128978 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="49961df2-6dfd-485f-8f00-3645c115c7f0" containerName="dnsmasq-dns" Dec 01 08:58:14 crc kubenswrapper[4689]: E1201 08:58:14.129011 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.129017 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api" Dec 01 08:58:14 crc kubenswrapper[4689]: E1201 08:58:14.129025 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api-log" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.129030 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api-log" Dec 01 08:58:14 crc kubenswrapper[4689]: E1201 08:58:14.129043 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerName="cinder-scheduler" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.129049 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerName="cinder-scheduler" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.129221 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerName="probe" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.129254 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="49961df2-6dfd-485f-8f00-3645c115c7f0" containerName="dnsmasq-dns" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.129271 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" containerName="cinder-scheduler" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.129287 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.129298 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" containerName="barbican-api-log" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.130655 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.137742 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.141801 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.199776 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqlp\" (UniqueName: \"kubernetes.io/projected/0556c1c8-69cc-4fa6-a3df-46a4ed439312-kube-api-access-6kqlp\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.199860 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.199904 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.200005 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0556c1c8-69cc-4fa6-a3df-46a4ed439312-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.200043 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-config-data\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.200075 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-scripts\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.302409 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.302534 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.302706 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0556c1c8-69cc-4fa6-a3df-46a4ed439312-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.302806 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-config-data\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.302869 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-scripts\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.302891 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0556c1c8-69cc-4fa6-a3df-46a4ed439312-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.302923 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kqlp\" (UniqueName: \"kubernetes.io/projected/0556c1c8-69cc-4fa6-a3df-46a4ed439312-kube-api-access-6kqlp\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.309024 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.316594 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.317017 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-config-data\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.320366 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0556c1c8-69cc-4fa6-a3df-46a4ed439312-scripts\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.326562 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kqlp\" (UniqueName: \"kubernetes.io/projected/0556c1c8-69cc-4fa6-a3df-46a4ed439312-kube-api-access-6kqlp\") pod \"cinder-scheduler-0\" (UID: \"0556c1c8-69cc-4fa6-a3df-46a4ed439312\") " pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc 
kubenswrapper[4689]: I1201 08:58:14.456640 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.908691 4689 generic.go:334] "Generic (PLEG): container finished" podID="f54de58e-9111-462b-a86e-8e324060c8aa" containerID="e71e70e63614adcc804aca95ac195b353f6938e502c6418ac9609f75ff113a02" exitCode=2 Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.908992 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f54de58e-9111-462b-a86e-8e324060c8aa","Type":"ContainerDied","Data":"e71e70e63614adcc804aca95ac195b353f6938e502c6418ac9609f75ff113a02"} Dec 01 08:58:14 crc kubenswrapper[4689]: I1201 08:58:14.999822 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 01 08:58:15 crc kubenswrapper[4689]: W1201 08:58:15.012820 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0556c1c8_69cc_4fa6_a3df_46a4ed439312.slice/crio-abe75f8336eb4cb91a7b4fc82b6db207ad178ab0ac101b35f4e83791a360d8b4 WatchSource:0}: Error finding container abe75f8336eb4cb91a7b4fc82b6db207ad178ab0ac101b35f4e83791a360d8b4: Status 404 returned error can't find the container with id abe75f8336eb4cb91a7b4fc82b6db207ad178ab0ac101b35f4e83791a360d8b4 Dec 01 08:58:15 crc kubenswrapper[4689]: I1201 08:58:15.079341 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f62c5e-1ae9-4826-a301-aec927bbf6ee" path="/var/lib/kubelet/pods/06f62c5e-1ae9-4826-a301-aec927bbf6ee/volumes" Dec 01 08:58:15 crc kubenswrapper[4689]: I1201 08:58:15.080602 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2943fb44-3da9-4d20-a7ff-7561e1eca1b1" path="/var/lib/kubelet/pods/2943fb44-3da9-4d20-a7ff-7561e1eca1b1/volumes" Dec 01 08:58:15 crc kubenswrapper[4689]: I1201 08:58:15.854721 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-9ggn8" podUID="49961df2-6dfd-485f-8f00-3645c115c7f0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.159:5353: i/o timeout" Dec 01 08:58:15 crc kubenswrapper[4689]: I1201 08:58:15.953297 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0556c1c8-69cc-4fa6-a3df-46a4ed439312","Type":"ContainerStarted","Data":"8b3034d3593a24a14d2b067c67c7cd4728b6706fb5845bc7936fe44037091d07"} Dec 01 08:58:15 crc kubenswrapper[4689]: I1201 08:58:15.953405 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0556c1c8-69cc-4fa6-a3df-46a4ed439312","Type":"ContainerStarted","Data":"abe75f8336eb4cb91a7b4fc82b6db207ad178ab0ac101b35f4e83791a360d8b4"} Dec 01 08:58:16 crc kubenswrapper[4689]: I1201 08:58:16.977782 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0556c1c8-69cc-4fa6-a3df-46a4ed439312","Type":"ContainerStarted","Data":"882374d094398becbabd5909a752bb62a09ef5754726718b53e283b65440d11f"} Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.004133 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.004112743 podStartE2EDuration="3.004112743s" podCreationTimestamp="2025-12-01 08:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 
08:58:16.995827446 +0000 UTC m=+1177.068115350" watchObservedRunningTime="2025-12-01 08:58:17.004112743 +0000 UTC m=+1177.076400647" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.390963 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.402241 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.417129 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.427586 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.428524 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.428758 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mn8gv" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.477350 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.477624 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd85r\" (UniqueName: \"kubernetes.io/projected/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-kube-api-access-jd85r\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.477730 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-openstack-config\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.477883 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-openstack-config-secret\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.579760 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.584258 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd85r\" (UniqueName: \"kubernetes.io/projected/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-kube-api-access-jd85r\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.584311 4689 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-openstack-config\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.584521 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-openstack-config-secret\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.586900 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-openstack-config\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.587572 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.589913 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-openstack-config-secret\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.601521 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd85r\" (UniqueName: \"kubernetes.io/projected/0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285-kube-api-access-jd85r\") pod \"openstackclient\" (UID: \"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285\") " pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.725294 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 01 08:58:17 crc kubenswrapper[4689]: I1201 08:58:17.863944 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.003833 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-combined-ca-bundle\") pod \"50356777-8001-44ef-95a4-73db83be36bc\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.004170 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-config\") pod \"50356777-8001-44ef-95a4-73db83be36bc\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.004193 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7jkr\" (UniqueName: \"kubernetes.io/projected/50356777-8001-44ef-95a4-73db83be36bc-kube-api-access-j7jkr\") pod \"50356777-8001-44ef-95a4-73db83be36bc\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.004924 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-ovndb-tls-certs\") pod \"50356777-8001-44ef-95a4-73db83be36bc\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.005014 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-httpd-config\") pod \"50356777-8001-44ef-95a4-73db83be36bc\" (UID: \"50356777-8001-44ef-95a4-73db83be36bc\") " Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.014823 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "50356777-8001-44ef-95a4-73db83be36bc" (UID: "50356777-8001-44ef-95a4-73db83be36bc"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.015262 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50356777-8001-44ef-95a4-73db83be36bc-kube-api-access-j7jkr" (OuterVolumeSpecName: "kube-api-access-j7jkr") pod "50356777-8001-44ef-95a4-73db83be36bc" (UID: "50356777-8001-44ef-95a4-73db83be36bc"). InnerVolumeSpecName "kube-api-access-j7jkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.019040 4689 generic.go:334] "Generic (PLEG): container finished" podID="50356777-8001-44ef-95a4-73db83be36bc" containerID="ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e" exitCode=0 Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.020051 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74cd45bd8d-lsl5j" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.020605 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cd45bd8d-lsl5j" event={"ID":"50356777-8001-44ef-95a4-73db83be36bc","Type":"ContainerDied","Data":"ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e"} Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.020663 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74cd45bd8d-lsl5j" event={"ID":"50356777-8001-44ef-95a4-73db83be36bc","Type":"ContainerDied","Data":"c98d481d294e1ef82b1f23d6f84fc86296637070030df151f61d7d928ae97441"} Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.020685 4689 scope.go:117] "RemoveContainer" containerID="24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.072609 4689 scope.go:117] "RemoveContainer" containerID="ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.107492 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-config" (OuterVolumeSpecName: "config") pod "50356777-8001-44ef-95a4-73db83be36bc" (UID: "50356777-8001-44ef-95a4-73db83be36bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.108777 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.108827 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.108841 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7jkr\" (UniqueName: \"kubernetes.io/projected/50356777-8001-44ef-95a4-73db83be36bc-kube-api-access-j7jkr\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.127572 4689 scope.go:117] "RemoveContainer" containerID="24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f" Dec 01 08:58:18 crc kubenswrapper[4689]: E1201 08:58:18.128851 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f\": container with ID starting with 24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f not found: ID does not exist" containerID="24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.128887 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f"} err="failed to get container status \"24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f\": rpc error: code = NotFound desc = could not find container \"24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f\": container with ID starting with 24bf8999cb46d83ed2d5ee7aaee2ccfe420144dea0e10fc595c4d0ca1733718f not found: ID does not exist" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.128908 4689 
scope.go:117] "RemoveContainer" containerID="ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e" Dec 01 08:58:18 crc kubenswrapper[4689]: E1201 08:58:18.129570 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e\": container with ID starting with ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e not found: ID does not exist" containerID="ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.129616 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e"} err="failed to get container status \"ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e\": rpc error: code = NotFound desc = could not find container \"ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e\": container with ID starting with ecd2fb94a7a6b1d73891d1bc1d4a2c543437d33c69142f7a4fb2f8cdfbbbf97e not found: ID does not exist" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.141646 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "50356777-8001-44ef-95a4-73db83be36bc" (UID: "50356777-8001-44ef-95a4-73db83be36bc"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.147215 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50356777-8001-44ef-95a4-73db83be36bc" (UID: "50356777-8001-44ef-95a4-73db83be36bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.210938 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.210980 4689 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50356777-8001-44ef-95a4-73db83be36bc-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.341588 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 01 08:58:18 crc kubenswrapper[4689]: W1201 08:58:18.355674 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cbf9f73_fecd_4c17_95c6_b0bd5a1ae285.slice/crio-d42dbebeb72a42794204435109e0bd2bf73c874de10a364195a97f5c463f4d69 WatchSource:0}: Error finding container d42dbebeb72a42794204435109e0bd2bf73c874de10a364195a97f5c463f4d69: Status 404 returned error can't find the container with id d42dbebeb72a42794204435109e0bd2bf73c874de10a364195a97f5c463f4d69 Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.447945 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74cd45bd8d-lsl5j"] Dec 01 08:58:18 crc kubenswrapper[4689]: I1201 08:58:18.455631 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74cd45bd8d-lsl5j"] Dec 01 08:58:19 crc kubenswrapper[4689]: I1201 08:58:19.032066 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285","Type":"ContainerStarted","Data":"d42dbebeb72a42794204435109e0bd2bf73c874de10a364195a97f5c463f4d69"} Dec 01 08:58:19 crc kubenswrapper[4689]: I1201 08:58:19.058259 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50356777-8001-44ef-95a4-73db83be36bc" path="/var/lib/kubelet/pods/50356777-8001-44ef-95a4-73db83be36bc/volumes" Dec 01 08:58:19 crc kubenswrapper[4689]: I1201 08:58:19.457956 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.106262 4689 generic.go:334] "Generic (PLEG): container finished" podID="f54de58e-9111-462b-a86e-8e324060c8aa" containerID="5d84c0bc33fa0c594dd2e4ac53c19ea7e3a986eb17d8a353c63f16fb5ad089d6" exitCode=0 Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.106754 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f54de58e-9111-462b-a86e-8e324060c8aa","Type":"ContainerDied","Data":"5d84c0bc33fa0c594dd2e4ac53c19ea7e3a986eb17d8a353c63f16fb5ad089d6"} Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.106785 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f54de58e-9111-462b-a86e-8e324060c8aa","Type":"ContainerDied","Data":"72a69f752069c7c0043e8b641e84c8a59cd5a3d76ed9c8596d72a2ca20805a7a"} Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.106829 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72a69f752069c7c0043e8b641e84c8a59cd5a3d76ed9c8596d72a2ca20805a7a" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.130764 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.168422 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-run-httpd\") pod \"f54de58e-9111-462b-a86e-8e324060c8aa\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.168515 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-log-httpd\") pod \"f54de58e-9111-462b-a86e-8e324060c8aa\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.168561 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-scripts\") pod \"f54de58e-9111-462b-a86e-8e324060c8aa\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.168615 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r29fc\" (UniqueName: \"kubernetes.io/projected/f54de58e-9111-462b-a86e-8e324060c8aa-kube-api-access-r29fc\") pod \"f54de58e-9111-462b-a86e-8e324060c8aa\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.168675 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-combined-ca-bundle\") pod \"f54de58e-9111-462b-a86e-8e324060c8aa\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.168711 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-sg-core-conf-yaml\") pod \"f54de58e-9111-462b-a86e-8e324060c8aa\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.168770 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-config-data\") pod \"f54de58e-9111-462b-a86e-8e324060c8aa\" (UID: \"f54de58e-9111-462b-a86e-8e324060c8aa\") " Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.168953 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f54de58e-9111-462b-a86e-8e324060c8aa" (UID: "f54de58e-9111-462b-a86e-8e324060c8aa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.169484 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.169632 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f54de58e-9111-462b-a86e-8e324060c8aa" (UID: "f54de58e-9111-462b-a86e-8e324060c8aa"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.191079 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-scripts" (OuterVolumeSpecName: "scripts") pod "f54de58e-9111-462b-a86e-8e324060c8aa" (UID: "f54de58e-9111-462b-a86e-8e324060c8aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.202773 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54de58e-9111-462b-a86e-8e324060c8aa-kube-api-access-r29fc" (OuterVolumeSpecName: "kube-api-access-r29fc") pod "f54de58e-9111-462b-a86e-8e324060c8aa" (UID: "f54de58e-9111-462b-a86e-8e324060c8aa"). InnerVolumeSpecName "kube-api-access-r29fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.218474 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f54de58e-9111-462b-a86e-8e324060c8aa" (UID: "f54de58e-9111-462b-a86e-8e324060c8aa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.238523 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-config-data" (OuterVolumeSpecName: "config-data") pod "f54de58e-9111-462b-a86e-8e324060c8aa" (UID: "f54de58e-9111-462b-a86e-8e324060c8aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.255746 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f54de58e-9111-462b-a86e-8e324060c8aa" (UID: "f54de58e-9111-462b-a86e-8e324060c8aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.271297 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.271350 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.271361 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f54de58e-9111-462b-a86e-8e324060c8aa-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.271394 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.271415 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r29fc\" (UniqueName: \"kubernetes.io/projected/f54de58e-9111-462b-a86e-8e324060c8aa-kube-api-access-r29fc\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:21 crc kubenswrapper[4689]: I1201 08:58:21.271485 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54de58e-9111-462b-a86e-8e324060c8aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.083312 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7459744dff-cxqv7"] Dec 01 08:58:22 crc kubenswrapper[4689]: E1201 08:58:22.083899 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54de58e-9111-462b-a86e-8e324060c8aa" containerName="sg-core" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.083930 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54de58e-9111-462b-a86e-8e324060c8aa" containerName="sg-core" Dec 01 08:58:22 crc kubenswrapper[4689]: E1201 08:58:22.083946 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50356777-8001-44ef-95a4-73db83be36bc" containerName="neutron-api" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.083954 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="50356777-8001-44ef-95a4-73db83be36bc" containerName="neutron-api" Dec 01 08:58:22 crc kubenswrapper[4689]: E1201 08:58:22.083976 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50356777-8001-44ef-95a4-73db83be36bc" containerName="neutron-httpd" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.083986 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="50356777-8001-44ef-95a4-73db83be36bc" containerName="neutron-httpd" Dec 01 08:58:22 crc kubenswrapper[4689]: E1201 08:58:22.084013 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54de58e-9111-462b-a86e-8e324060c8aa" containerName="ceilometer-notification-agent" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.084021 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54de58e-9111-462b-a86e-8e324060c8aa" containerName="ceilometer-notification-agent" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.084257 4689 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f54de58e-9111-462b-a86e-8e324060c8aa" containerName="sg-core" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.084289 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="50356777-8001-44ef-95a4-73db83be36bc" containerName="neutron-httpd" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.084310 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54de58e-9111-462b-a86e-8e324060c8aa" containerName="ceilometer-notification-agent" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.084325 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="50356777-8001-44ef-95a4-73db83be36bc" containerName="neutron-api" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.085576 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.087999 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.088253 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.088410 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.112246 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7459744dff-cxqv7"] Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.116677 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.186799 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-config-data\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.186895 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-public-tls-certs\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.187599 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-combined-ca-bundle\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.187661 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb6q8\" (UniqueName: \"kubernetes.io/projected/e242b763-d0db-401f-b552-d109d6c5ec28-kube-api-access-bb6q8\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.187720 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e242b763-d0db-401f-b552-d109d6c5ec28-run-httpd\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.187771 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e242b763-d0db-401f-b552-d109d6c5ec28-etc-swift\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.187832 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-internal-tls-certs\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.187906 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e242b763-d0db-401f-b552-d109d6c5ec28-log-httpd\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.193326 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.220339 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.240884 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.243697 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.245845 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.246087 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.254341 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.289987 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290037 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-public-tls-certs\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290101 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-config-data\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290133 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-combined-ca-bundle\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290153 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb6q8\" (UniqueName: \"kubernetes.io/projected/e242b763-d0db-401f-b552-d109d6c5ec28-kube-api-access-bb6q8\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290181 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e242b763-d0db-401f-b552-d109d6c5ec28-run-httpd\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290212 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e242b763-d0db-401f-b552-d109d6c5ec28-etc-swift\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290245 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-internal-tls-certs\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: 
\"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290269 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5gh\" (UniqueName: \"kubernetes.io/projected/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-kube-api-access-9d5gh\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290308 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290404 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-log-httpd\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290428 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e242b763-d0db-401f-b552-d109d6c5ec28-log-httpd\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290448 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-scripts\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290466 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-run-httpd\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.290486 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-config-data\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.291908 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e242b763-d0db-401f-b552-d109d6c5ec28-run-httpd\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.292722 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e242b763-d0db-401f-b552-d109d6c5ec28-log-httpd\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.297400 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-config-data\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.298811 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-internal-tls-certs\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.301455 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-combined-ca-bundle\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.302111 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e242b763-d0db-401f-b552-d109d6c5ec28-etc-swift\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.302404 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e242b763-d0db-401f-b552-d109d6c5ec28-public-tls-certs\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.309324 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb6q8\" (UniqueName: \"kubernetes.io/projected/e242b763-d0db-401f-b552-d109d6c5ec28-kube-api-access-bb6q8\") pod \"swift-proxy-7459744dff-cxqv7\" (UID: \"e242b763-d0db-401f-b552-d109d6c5ec28\") " pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.392227 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.392301 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-config-data\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.392403 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5gh\" (UniqueName: \"kubernetes.io/projected/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-kube-api-access-9d5gh\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.392427 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.392456 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-log-httpd\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.392478 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-scripts\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.392495 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-run-httpd\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.392889 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-run-httpd\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.392969 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-log-httpd\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.397398 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.398890 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.398966 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-config-data\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.399182 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-scripts\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.403886 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.411950 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5gh\" (UniqueName: \"kubernetes.io/projected/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-kube-api-access-9d5gh\") pod \"ceilometer-0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " pod="openstack/ceilometer-0" Dec 01 08:58:22 crc kubenswrapper[4689]: I1201 08:58:22.572270 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:23 crc kubenswrapper[4689]: I1201 08:58:23.019277 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7459744dff-cxqv7"] Dec 01 08:58:23 crc kubenswrapper[4689]: I1201 08:58:23.112698 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54de58e-9111-462b-a86e-8e324060c8aa" path="/var/lib/kubelet/pods/f54de58e-9111-462b-a86e-8e324060c8aa/volumes" Dec 01 08:58:23 crc kubenswrapper[4689]: I1201 08:58:23.128129 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7459744dff-cxqv7" event={"ID":"e242b763-d0db-401f-b552-d109d6c5ec28","Type":"ContainerStarted","Data":"c7c4a3e9402405aefa13b6cda1054b3f442110c7c776e55914e87de627cd519d"} Dec 01 08:58:23 crc kubenswrapper[4689]: I1201 08:58:23.149797 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:24 crc kubenswrapper[4689]: I1201 08:58:24.181578 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerStarted","Data":"d6b49d4cf22adb3360db4a2d7dff7c0535d557095f824962c76f09e8a9638ce3"} Dec 01 08:58:24 crc kubenswrapper[4689]: I1201 08:58:24.186543 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7459744dff-cxqv7" event={"ID":"e242b763-d0db-401f-b552-d109d6c5ec28","Type":"ContainerStarted","Data":"561c3e59b3a2ce153c3153d2883f1f3672094c47a9d37baacd5c2afa5ad9c840"} Dec 01 08:58:24 crc kubenswrapper[4689]: I1201 08:58:24.186598 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7459744dff-cxqv7" event={"ID":"e242b763-d0db-401f-b552-d109d6c5ec28","Type":"ContainerStarted","Data":"5148431c75a0f3ac6262f27f92f97e772547ad992d47d0475a7a8c3c385b36f8"} Dec 01 08:58:24 crc kubenswrapper[4689]: I1201 08:58:24.186878 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:24 crc kubenswrapper[4689]: I1201 08:58:24.186913 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:24 crc kubenswrapper[4689]: I1201 08:58:24.213610 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7459744dff-cxqv7" podStartSLOduration=2.21353731 podStartE2EDuration="2.21353731s" podCreationTimestamp="2025-12-01 08:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:58:24.209416857 +0000 UTC m=+1184.281704761" watchObservedRunningTime="2025-12-01 08:58:24.21353731 +0000 UTC m=+1184.285825214" Dec 01 08:58:24 crc kubenswrapper[4689]: I1201 08:58:24.815536 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 01 08:58:25 crc kubenswrapper[4689]: I1201 
08:58:25.308026 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.225141 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.265423 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dvt4\" (UniqueName: \"kubernetes.io/projected/6e961c41-2024-43a4-bb5a-35926b887048-kube-api-access-5dvt4\") pod \"6e961c41-2024-43a4-bb5a-35926b887048\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.265486 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data\") pod \"6e961c41-2024-43a4-bb5a-35926b887048\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.265598 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data-custom\") pod \"6e961c41-2024-43a4-bb5a-35926b887048\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.265622 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e961c41-2024-43a4-bb5a-35926b887048-etc-machine-id\") pod \"6e961c41-2024-43a4-bb5a-35926b887048\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.265660 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e961c41-2024-43a4-bb5a-35926b887048-logs\") pod \"6e961c41-2024-43a4-bb5a-35926b887048\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.265750 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-scripts\") pod \"6e961c41-2024-43a4-bb5a-35926b887048\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.265868 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-combined-ca-bundle\") pod \"6e961c41-2024-43a4-bb5a-35926b887048\" (UID: \"6e961c41-2024-43a4-bb5a-35926b887048\") " Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.269770 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e961c41-2024-43a4-bb5a-35926b887048-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e961c41-2024-43a4-bb5a-35926b887048" (UID: "6e961c41-2024-43a4-bb5a-35926b887048"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.270537 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e961c41-2024-43a4-bb5a-35926b887048-logs" (OuterVolumeSpecName: "logs") pod "6e961c41-2024-43a4-bb5a-35926b887048" (UID: "6e961c41-2024-43a4-bb5a-35926b887048"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.278527 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-scripts" (OuterVolumeSpecName: "scripts") pod "6e961c41-2024-43a4-bb5a-35926b887048" (UID: "6e961c41-2024-43a4-bb5a-35926b887048"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.280144 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e961c41-2024-43a4-bb5a-35926b887048-kube-api-access-5dvt4" (OuterVolumeSpecName: "kube-api-access-5dvt4") pod "6e961c41-2024-43a4-bb5a-35926b887048" (UID: "6e961c41-2024-43a4-bb5a-35926b887048"). InnerVolumeSpecName "kube-api-access-5dvt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.289553 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e961c41-2024-43a4-bb5a-35926b887048" (UID: "6e961c41-2024-43a4-bb5a-35926b887048"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.327477 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e961c41-2024-43a4-bb5a-35926b887048" (UID: "6e961c41-2024-43a4-bb5a-35926b887048"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.346724 4689 generic.go:334] "Generic (PLEG): container finished" podID="fcebf70c-3de0-499e-928d-3419299a512f" containerID="fab80120b3cdcb11f34e6bc51dab2ce8ef0833fb8a3e2dbb9da58553b25ef62f" exitCode=137 Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.346797 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d65b9788-2kr5p" event={"ID":"fcebf70c-3de0-499e-928d-3419299a512f","Type":"ContainerDied","Data":"fab80120b3cdcb11f34e6bc51dab2ce8ef0833fb8a3e2dbb9da58553b25ef62f"} Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.353178 4689 generic.go:334] "Generic (PLEG): container finished" podID="6e961c41-2024-43a4-bb5a-35926b887048" containerID="6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06" exitCode=137 Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.353323 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.353624 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e961c41-2024-43a4-bb5a-35926b887048","Type":"ContainerDied","Data":"6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06"} Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.353698 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6e961c41-2024-43a4-bb5a-35926b887048","Type":"ContainerDied","Data":"7e56745e8a5341e8537cebf20bfed88b7968484ba8aca6bf01f4adba8f05d925"} Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.353721 4689 scope.go:117] "RemoveContainer" containerID="6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.359260 4689 generic.go:334] "Generic (PLEG): container finished" podID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerID="a091448b207aa75d136d6feb237ad0fa14303d634a2df9de676e06282a8c25ec" exitCode=137 Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.359349 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9cd9dbd-qxwq7" event={"ID":"e88c04bb-01ff-47a6-8942-05a9a2a68416","Type":"ContainerDied","Data":"a091448b207aa75d136d6feb237ad0fa14303d634a2df9de676e06282a8c25ec"} Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.369604 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerStarted","Data":"a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52"} Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.370591 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.370623 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.370636 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dvt4\" (UniqueName: \"kubernetes.io/projected/6e961c41-2024-43a4-bb5a-35926b887048-kube-api-access-5dvt4\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.370645 4689 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.370654 4689 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e961c41-2024-43a4-bb5a-35926b887048-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.370662 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e961c41-2024-43a4-bb5a-35926b887048-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.383000 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data" (OuterVolumeSpecName: "config-data") pod 
"6e961c41-2024-43a4-bb5a-35926b887048" (UID: "6e961c41-2024-43a4-bb5a-35926b887048"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.384201 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285","Type":"ContainerStarted","Data":"3079c8e67360299ff180baed43dd30ded5c0da4d18344bc890297a4217e36675"} Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.396545 4689 scope.go:117] "RemoveContainer" containerID="96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.409631 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.842368436 podStartE2EDuration="15.409610019s" podCreationTimestamp="2025-12-01 08:58:17 +0000 UTC" firstStartedPulling="2025-12-01 08:58:18.357962874 +0000 UTC m=+1178.430250778" lastFinishedPulling="2025-12-01 08:58:31.925204457 +0000 UTC m=+1191.997492361" observedRunningTime="2025-12-01 08:58:32.406122123 +0000 UTC m=+1192.478410027" watchObservedRunningTime="2025-12-01 08:58:32.409610019 +0000 UTC m=+1192.481897923" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.422963 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.424515 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7459744dff-cxqv7" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.446011 4689 scope.go:117] "RemoveContainer" containerID="6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06" Dec 01 08:58:32 crc kubenswrapper[4689]: E1201 08:58:32.449120 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06\": container with ID starting with 6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06 not found: ID does not exist" containerID="6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.449166 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06"} err="failed to get container status \"6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06\": rpc error: code = NotFound desc = could not find container \"6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06\": container with ID starting with 6cf33481554ec1616e0c9863d79dab2edd8317192599b608ed42855463879e06 not found: ID does not exist" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.449214 4689 scope.go:117] "RemoveContainer" containerID="96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412" Dec 01 08:58:32 crc kubenswrapper[4689]: E1201 08:58:32.449570 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412\": container with ID starting with 96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412 not found: ID does not exist" containerID="96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412" Dec 01 08:58:32 crc kubenswrapper[4689]: 
I1201 08:58:32.449591 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412"} err="failed to get container status \"96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412\": rpc error: code = NotFound desc = could not find container \"96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412\": container with ID starting with 96660f73a77ef4cbfb1ace3ec60f36199140a8d65eab28e66db6737198c49412 not found: ID does not exist" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.471887 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e961c41-2024-43a4-bb5a-35926b887048-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.686699 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.698900 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.719792 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 01 08:58:32 crc kubenswrapper[4689]: E1201 08:58:32.722419 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e961c41-2024-43a4-bb5a-35926b887048" containerName="cinder-api" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.722537 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e961c41-2024-43a4-bb5a-35926b887048" containerName="cinder-api" Dec 01 08:58:32 crc kubenswrapper[4689]: E1201 08:58:32.722713 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e961c41-2024-43a4-bb5a-35926b887048" containerName="cinder-api-log" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.722778 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e961c41-2024-43a4-bb5a-35926b887048" containerName="cinder-api-log" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.723029 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e961c41-2024-43a4-bb5a-35926b887048" containerName="cinder-api-log" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.723109 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e961c41-2024-43a4-bb5a-35926b887048" containerName="cinder-api" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.726901 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.733943 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.734191 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.737636 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.741563 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.779293 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-scripts\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.779329 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-config-data\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.779379 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-logs\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.779415 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.779453 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.779549 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtt9f\" (UniqueName: \"kubernetes.io/projected/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-kube-api-access-mtt9f\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.779675 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.779728 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.779760 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.889862 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtt9f\" (UniqueName: \"kubernetes.io/projected/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-kube-api-access-mtt9f\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.889929 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.889952 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.889972 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.890013 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-scripts\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.890030 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-config-data\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.890063 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-logs\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.890101 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.890141 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.891833 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.892494 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-logs\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.904138 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.904773 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-config-data\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.906339 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.906557 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.906601 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.916983 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-scripts\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:32 crc kubenswrapper[4689]: I1201 08:58:32.927264 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtt9f\" (UniqueName: \"kubernetes.io/projected/8f0f718c-3a19-482a-9ed0-4c4d7dbac886-kube-api-access-mtt9f\") pod \"cinder-api-0\" (UID: \"8f0f718c-3a19-482a-9ed0-4c4d7dbac886\") " pod="openstack/cinder-api-0" Dec 01 08:58:33 crc kubenswrapper[4689]: I1201 08:58:33.047004 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 01 08:58:33 crc kubenswrapper[4689]: I1201 08:58:33.058672 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e961c41-2024-43a4-bb5a-35926b887048" path="/var/lib/kubelet/pods/6e961c41-2024-43a4-bb5a-35926b887048/volumes" Dec 01 08:58:33 crc kubenswrapper[4689]: I1201 08:58:33.409564 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d65b9788-2kr5p" event={"ID":"fcebf70c-3de0-499e-928d-3419299a512f","Type":"ContainerStarted","Data":"dcfa1be0370fc18f949696b70741e006a2da4a40fd839a97095963a8724c91cd"} Dec 01 08:58:33 crc kubenswrapper[4689]: I1201 08:58:33.419940 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9cd9dbd-qxwq7" event={"ID":"e88c04bb-01ff-47a6-8942-05a9a2a68416","Type":"ContainerStarted","Data":"dbacf385cc7e024476440ee9e90e68d5f7a572a69d91e9e613651d878c816d6c"} Dec 01 08:58:33 crc kubenswrapper[4689]: I1201 08:58:33.424296 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerStarted","Data":"586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1"} Dec 01 08:58:33 crc kubenswrapper[4689]: I1201 08:58:33.588789 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 01 08:58:33 crc kubenswrapper[4689]: W1201 08:58:33.597714 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f0f718c_3a19_482a_9ed0_4c4d7dbac886.slice/crio-5099724a6054612e7a5f5eaa43258dcfd281a34f50a49c087fc31182c4c11fda WatchSource:0}: Error finding container 5099724a6054612e7a5f5eaa43258dcfd281a34f50a49c087fc31182c4c11fda: Status 404 returned error can't find the container with id 5099724a6054612e7a5f5eaa43258dcfd281a34f50a49c087fc31182c4c11fda Dec 01 08:58:34 crc kubenswrapper[4689]: I1201 08:58:34.434738 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerStarted","Data":"fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950"} Dec 01 08:58:34 crc kubenswrapper[4689]: I1201 08:58:34.440627 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f0f718c-3a19-482a-9ed0-4c4d7dbac886","Type":"ContainerStarted","Data":"5099724a6054612e7a5f5eaa43258dcfd281a34f50a49c087fc31182c4c11fda"} Dec 01 08:58:35 crc kubenswrapper[4689]: I1201 08:58:35.493295 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f0f718c-3a19-482a-9ed0-4c4d7dbac886","Type":"ContainerStarted","Data":"b805cac881cd223550cb90b845d33c20b6b7601ac91bd36ed171b2663b22fe0e"} Dec 01 08:58:35 crc kubenswrapper[4689]: I1201 08:58:35.493884 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f0f718c-3a19-482a-9ed0-4c4d7dbac886","Type":"ContainerStarted","Data":"c5612b4c41db8a219246f39c0eb08ad03494faaac34c12687eeccbfd03086296"} Dec 01 08:58:35 crc kubenswrapper[4689]: I1201 08:58:35.493898 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 01 08:58:35 crc kubenswrapper[4689]: I1201 08:58:35.551521 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.551494417 podStartE2EDuration="3.551494417s" podCreationTimestamp="2025-12-01 08:58:32 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:58:35.537082012 +0000 UTC m=+1195.609369906" watchObservedRunningTime="2025-12-01 08:58:35.551494417 +0000 UTC m=+1195.623782321" Dec 01 08:58:36 crc kubenswrapper[4689]: I1201 08:58:36.503744 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerStarted","Data":"cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7"} Dec 01 08:58:36 crc kubenswrapper[4689]: I1201 08:58:36.503924 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="ceilometer-central-agent" containerID="cri-o://a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52" gracePeriod=30 Dec 01 08:58:36 crc kubenswrapper[4689]: I1201 08:58:36.504167 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="proxy-httpd" containerID="cri-o://cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7" gracePeriod=30 Dec 01 08:58:36 crc kubenswrapper[4689]: I1201 08:58:36.504183 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="sg-core" containerID="cri-o://fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950" gracePeriod=30 Dec 01 08:58:36 crc kubenswrapper[4689]: I1201 08:58:36.504193 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="ceilometer-notification-agent" containerID="cri-o://586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1" gracePeriod=30 Dec 01 08:58:37 crc kubenswrapper[4689]: I1201 08:58:37.515595 4689 generic.go:334] "Generic (PLEG): container finished" podID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerID="cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7" exitCode=0 Dec 01 08:58:37 crc kubenswrapper[4689]: I1201 08:58:37.515997 4689 generic.go:334] "Generic (PLEG): container finished" podID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerID="fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950" exitCode=2 Dec 01 08:58:37 crc kubenswrapper[4689]: I1201 08:58:37.516014 4689 generic.go:334] "Generic (PLEG): container finished" podID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerID="586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1" exitCode=0 Dec 01 08:58:37 crc kubenswrapper[4689]: I1201 08:58:37.515658 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerDied","Data":"cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7"} Dec 01 08:58:37 crc kubenswrapper[4689]: I1201 08:58:37.516057 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerDied","Data":"fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950"} Dec 01 08:58:37 crc kubenswrapper[4689]: I1201 08:58:37.516075 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerDied","Data":"586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1"} Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.066113 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.152769 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-scripts\") pod \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.153261 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-combined-ca-bundle\") pod \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.153526 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-sg-core-conf-yaml\") pod \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.154080 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d5gh\" (UniqueName: \"kubernetes.io/projected/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-kube-api-access-9d5gh\") pod \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.154428 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-config-data\") pod \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.154468 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-log-httpd\") pod \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.154524 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-run-httpd\") pod \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\" (UID: \"c68dacab-e7a5-480f-b8ec-d2d14169b7c0\") " Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.156501 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c68dacab-e7a5-480f-b8ec-d2d14169b7c0" (UID: "c68dacab-e7a5-480f-b8ec-d2d14169b7c0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.157256 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c68dacab-e7a5-480f-b8ec-d2d14169b7c0" (UID: "c68dacab-e7a5-480f-b8ec-d2d14169b7c0"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.169653 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-scripts" (OuterVolumeSpecName: "scripts") pod "c68dacab-e7a5-480f-b8ec-d2d14169b7c0" (UID: "c68dacab-e7a5-480f-b8ec-d2d14169b7c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.180222 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-kube-api-access-9d5gh" (OuterVolumeSpecName: "kube-api-access-9d5gh") pod "c68dacab-e7a5-480f-b8ec-d2d14169b7c0" (UID: "c68dacab-e7a5-480f-b8ec-d2d14169b7c0"). InnerVolumeSpecName "kube-api-access-9d5gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.254529 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c68dacab-e7a5-480f-b8ec-d2d14169b7c0" (UID: "c68dacab-e7a5-480f-b8ec-d2d14169b7c0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.256941 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.257088 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.257198 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.257281 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.257350 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d5gh\" (UniqueName: \"kubernetes.io/projected/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-kube-api-access-9d5gh\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.314276 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c68dacab-e7a5-480f-b8ec-d2d14169b7c0" (UID: "c68dacab-e7a5-480f-b8ec-d2d14169b7c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.348887 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-config-data" (OuterVolumeSpecName: "config-data") pod "c68dacab-e7a5-480f-b8ec-d2d14169b7c0" (UID: "c68dacab-e7a5-480f-b8ec-d2d14169b7c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.361354 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.361622 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c68dacab-e7a5-480f-b8ec-d2d14169b7c0-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.546257 4689 generic.go:334] "Generic (PLEG): container finished" podID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerID="a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52" exitCode=0 Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.546623 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.546654 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerDied","Data":"a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52"} Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.547066 4689 scope.go:117] "RemoveContainer" containerID="cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.546919 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c68dacab-e7a5-480f-b8ec-d2d14169b7c0","Type":"ContainerDied","Data":"d6b49d4cf22adb3360db4a2d7dff7c0535d557095f824962c76f09e8a9638ce3"} Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.599267 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.611854 4689 scope.go:117] "RemoveContainer" containerID="fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.616637 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.638046 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:40 crc kubenswrapper[4689]: E1201 08:58:40.638570 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="sg-core" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.638591 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="sg-core" Dec 01 08:58:40 crc kubenswrapper[4689]: E1201 08:58:40.638607 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="proxy-httpd" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.638616 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="proxy-httpd" Dec 01 08:58:40 crc kubenswrapper[4689]: E1201 08:58:40.638635 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="ceilometer-central-agent" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.638643 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" 
containerName="ceilometer-central-agent" Dec 01 08:58:40 crc kubenswrapper[4689]: E1201 08:58:40.638675 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="ceilometer-notification-agent" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.638694 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="ceilometer-notification-agent" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.638933 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="proxy-httpd" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.638962 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="ceilometer-central-agent" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.638975 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="ceilometer-notification-agent" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.638996 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" containerName="sg-core" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.644843 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.650865 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.651133 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.651168 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.710551 4689 scope.go:117] "RemoveContainer" containerID="586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.749855 4689 scope.go:117] "RemoveContainer" containerID="a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.773444 4689 scope.go:117] "RemoveContainer" containerID="cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7" Dec 01 08:58:40 crc kubenswrapper[4689]: E1201 08:58:40.773887 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7\": container with ID starting with cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7 not found: ID does not exist" containerID="cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.773920 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7"} err="failed to get container status \"cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7\": rpc error: code = NotFound desc = could not find container \"cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7\": container with ID starting with cc04593835206831a48fa1ec823d86de6c75c435f1fb0722ede0ceeb8a13eff7 not found: ID does not exist" Dec 01 08:58:40 crc 
kubenswrapper[4689]: I1201 08:58:40.773944 4689 scope.go:117] "RemoveContainer" containerID="fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950" Dec 01 08:58:40 crc kubenswrapper[4689]: E1201 08:58:40.774489 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950\": container with ID starting with fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950 not found: ID does not exist" containerID="fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.774531 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950"} err="failed to get container status \"fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950\": rpc error: code = NotFound desc = could not find container \"fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950\": container with ID starting with fe760fa039dcdd6cf5cc979b852974b2e4383996e473be6a7486ffb383fcd950 not found: ID does not exist" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.774581 4689 scope.go:117] "RemoveContainer" containerID="586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.777451 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-scripts\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.777501 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.777532 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-config-data\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.777585 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl658\" (UniqueName: \"kubernetes.io/projected/c79796e3-78be-4803-9627-c8c769fc9f59-kube-api-access-cl658\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.777601 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-run-httpd\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.777624 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-log-httpd\") pod \"ceilometer-0\" (UID: 
\"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.777827 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: E1201 08:58:40.780812 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1\": container with ID starting with 586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1 not found: ID does not exist" containerID="586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.780851 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1"} err="failed to get container status \"586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1\": rpc error: code = NotFound desc = could not find container \"586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1\": container with ID starting with 586f084de265f6982abbadabe89f0a00828316c037bc9f8726d818d15f79f9a1 not found: ID does not exist" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.780872 4689 scope.go:117] "RemoveContainer" containerID="a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52" Dec 01 08:58:40 crc kubenswrapper[4689]: E1201 08:58:40.784562 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52\": container with ID starting with a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52 not found: ID does not exist" containerID="a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.784599 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52"} err="failed to get container status \"a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52\": rpc error: code = NotFound desc = could not find container \"a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52\": container with ID starting with a4074dd58183aae6e955b8ee20d54bbaecb524ab4fe5534f1637cbac1ed28e52 not found: ID does not exist" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.879446 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl658\" (UniqueName: \"kubernetes.io/projected/c79796e3-78be-4803-9627-c8c769fc9f59-kube-api-access-cl658\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.879494 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-run-httpd\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.879525 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-log-httpd\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.879588 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.879657 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-scripts\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.879691 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.879726 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-config-data\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.880339 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-run-httpd\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.880459 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-log-httpd\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.882428 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.883798 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.886396 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.907338 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-scripts\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.907739 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.909256 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl658\" (UniqueName: \"kubernetes.io/projected/c79796e3-78be-4803-9627-c8c769fc9f59-kube-api-access-cl658\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:40 crc kubenswrapper[4689]: I1201 08:58:40.909513 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-config-data\") pod \"ceilometer-0\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " pod="openstack/ceilometer-0" Dec 01 08:58:41 crc kubenswrapper[4689]: I1201 08:58:41.029170 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:41 crc kubenswrapper[4689]: I1201 08:58:41.075425 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c68dacab-e7a5-480f-b8ec-d2d14169b7c0" path="/var/lib/kubelet/pods/c68dacab-e7a5-480f-b8ec-d2d14169b7c0/volumes" Dec 01 08:58:41 crc kubenswrapper[4689]: W1201 08:58:41.559950 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc79796e3_78be_4803_9627_c8c769fc9f59.slice/crio-12eddc44405cabf2d76f29a6af7f7f3f46cd443f510dcac629dfc811f23db818 WatchSource:0}: Error finding container 12eddc44405cabf2d76f29a6af7f7f3f46cd443f510dcac629dfc811f23db818: Status 404 returned error can't find the container with id 12eddc44405cabf2d76f29a6af7f7f3f46cd443f510dcac629dfc811f23db818 Dec 01 08:58:41 crc kubenswrapper[4689]: I1201 08:58:41.562154 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:42 crc kubenswrapper[4689]: I1201 08:58:42.050187 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:58:42 crc kubenswrapper[4689]: I1201 08:58:42.050529 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:58:42 crc kubenswrapper[4689]: I1201 08:58:42.054589 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 01 08:58:42 crc kubenswrapper[4689]: I1201 08:58:42.202467 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:42 crc kubenswrapper[4689]: I1201 08:58:42.238698 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:58:42 crc kubenswrapper[4689]: I1201 08:58:42.238750 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:58:42 crc kubenswrapper[4689]: I1201 08:58:42.240651 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d65b9788-2kr5p" podUID="fcebf70c-3de0-499e-928d-3419299a512f" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Dec 01 08:58:42 crc kubenswrapper[4689]: I1201 08:58:42.579280 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerStarted","Data":"475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b"} Dec 01 08:58:42 crc kubenswrapper[4689]: I1201 08:58:42.579712 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerStarted","Data":"12eddc44405cabf2d76f29a6af7f7f3f46cd443f510dcac629dfc811f23db818"} Dec 01 08:58:43 crc kubenswrapper[4689]: I1201 08:58:43.602813 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerStarted","Data":"5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6"} Dec 01 08:58:44 crc kubenswrapper[4689]: I1201 08:58:44.614388 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerStarted","Data":"3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee"} Dec 01 08:58:45 crc kubenswrapper[4689]: I1201 08:58:45.406948 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 01 08:58:46 crc kubenswrapper[4689]: I1201 08:58:46.568918 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:58:46 crc kubenswrapper[4689]: I1201 08:58:46.571067 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerName="glance-log" containerID="cri-o://50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682" gracePeriod=30 Dec 01 08:58:46 crc kubenswrapper[4689]: I1201 08:58:46.571196 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerName="glance-httpd" containerID="cri-o://983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c" gracePeriod=30 Dec 01 08:58:46 crc kubenswrapper[4689]: I1201 08:58:46.663634 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerStarted","Data":"20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028"} Dec 01 08:58:46 crc kubenswrapper[4689]: I1201 08:58:46.664016 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="ceilometer-central-agent" containerID="cri-o://475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b" gracePeriod=30 Dec 01 08:58:46 crc kubenswrapper[4689]: I1201 08:58:46.664241 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 08:58:46 crc kubenswrapper[4689]: I1201 08:58:46.664509 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="proxy-httpd" containerID="cri-o://20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028" gracePeriod=30 Dec 01 08:58:46 crc 
kubenswrapper[4689]: I1201 08:58:46.664567 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="sg-core" containerID="cri-o://3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee" gracePeriod=30 Dec 01 08:58:46 crc kubenswrapper[4689]: I1201 08:58:46.664605 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="ceilometer-notification-agent" containerID="cri-o://5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6" gracePeriod=30 Dec 01 08:58:46 crc kubenswrapper[4689]: I1201 08:58:46.705522 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.83815823 podStartE2EDuration="6.705505214s" podCreationTimestamp="2025-12-01 08:58:40 +0000 UTC" firstStartedPulling="2025-12-01 08:58:41.568717282 +0000 UTC m=+1201.641005186" lastFinishedPulling="2025-12-01 08:58:45.436064266 +0000 UTC m=+1205.508352170" observedRunningTime="2025-12-01 08:58:46.699305155 +0000 UTC m=+1206.771593069" watchObservedRunningTime="2025-12-01 08:58:46.705505214 +0000 UTC m=+1206.777793108" Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.673915 4689 generic.go:334] "Generic (PLEG): container finished" podID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerID="50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682" exitCode=143 Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.674022 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361","Type":"ContainerDied","Data":"50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682"} Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.676896 4689 generic.go:334] "Generic (PLEG): container finished" podID="c79796e3-78be-4803-9627-c8c769fc9f59" containerID="20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028" exitCode=0 Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.676928 4689 generic.go:334] "Generic (PLEG): container finished" podID="c79796e3-78be-4803-9627-c8c769fc9f59" containerID="3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee" exitCode=2 Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.676936 4689 generic.go:334] "Generic (PLEG): container finished" podID="c79796e3-78be-4803-9627-c8c769fc9f59" containerID="5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6" exitCode=0 Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.676956 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerDied","Data":"20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028"} Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.676983 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerDied","Data":"3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee"} Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.676993 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerDied","Data":"5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6"} Dec 01 08:58:47 crc 
kubenswrapper[4689]: I1201 08:58:47.731937 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.732162 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerName="glance-log" containerID="cri-o://bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d" gracePeriod=30 Dec 01 08:58:47 crc kubenswrapper[4689]: I1201 08:58:47.732612 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerName="glance-httpd" containerID="cri-o://707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d" gracePeriod=30 Dec 01 08:58:48 crc kubenswrapper[4689]: I1201 08:58:48.689336 4689 generic.go:334] "Generic (PLEG): container finished" podID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerID="bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d" exitCode=143 Dec 01 08:58:48 crc kubenswrapper[4689]: I1201 08:58:48.689617 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b734f82-967b-49b2-bb9e-2f17fdcf54d3","Type":"ContainerDied","Data":"bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d"} Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.618406 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.697403 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-public-tls-certs\") pod \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.697451 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-scripts\") pod \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.697521 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-httpd-run\") pod \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.697595 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.697699 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhbjv\" (UniqueName: \"kubernetes.io/projected/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-kube-api-access-fhbjv\") pod \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.697756 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-combined-ca-bundle\") pod \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.697782 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-config-data\") pod \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.697877 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-logs\") pod \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.698735 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-logs" (OuterVolumeSpecName: "logs") pod "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" (UID: "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.711054 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" (UID: "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.715897 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-kube-api-access-fhbjv" (OuterVolumeSpecName: "kube-api-access-fhbjv") pod "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" (UID: "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361"). InnerVolumeSpecName "kube-api-access-fhbjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.717444 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" (UID: "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.747797 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-scripts" (OuterVolumeSpecName: "scripts") pod "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" (UID: "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.756016 4689 generic.go:334] "Generic (PLEG): container finished" podID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerID="983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c" exitCode=0 Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.756306 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361","Type":"ContainerDied","Data":"983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c"} Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.756429 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361","Type":"ContainerDied","Data":"a6d217e8225f02cd410f30f481332214f8dc6e5ac2c96ec662bf76de52745393"} Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.756487 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.756562 4689 scope.go:117] "RemoveContainer" containerID="983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.800525 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" (UID: "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.800875 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-public-tls-certs\") pod \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\" (UID: \"2df3ac61-47b1-4ca0-a5a2-80b2a94d1361\") " Dec 01 08:58:50 crc kubenswrapper[4689]: W1201 08:58:50.801041 4689 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361/volumes/kubernetes.io~secret/public-tls-certs Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.801077 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" (UID: "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.801839 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.801977 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.802068 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhbjv\" (UniqueName: \"kubernetes.io/projected/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-kube-api-access-fhbjv\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.802154 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.802233 4689 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.802324 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.820159 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" (UID: "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.835720 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-config-data" (OuterVolumeSpecName: "config-data") pod "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" (UID: "2df3ac61-47b1-4ca0-a5a2-80b2a94d1361"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.857311 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.866702 4689 scope.go:117] "RemoveContainer" containerID="50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.904155 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.904189 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.904202 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.913289 4689 scope.go:117] "RemoveContainer" containerID="983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c" Dec 01 08:58:50 crc kubenswrapper[4689]: E1201 08:58:50.915660 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c\": container with ID starting with 983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c not found: ID does not exist" containerID="983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.915701 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c"} err="failed to get container status \"983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c\": rpc error: code = NotFound desc = could not find container \"983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c\": container with ID starting with 983d509035efff3130b097cb56b9c7a0702e2f2a59c994ed32cf7b1f0a76b86c not found: ID does not exist" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.915730 4689 scope.go:117] "RemoveContainer" containerID="50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682" Dec 01 08:58:50 crc kubenswrapper[4689]: E1201 08:58:50.916124 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682\": container with ID starting with 50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682 not found: ID does not exist" containerID="50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682" Dec 01 08:58:50 crc kubenswrapper[4689]: I1201 08:58:50.916168 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682"} err="failed to get container status \"50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682\": rpc error: code = NotFound desc = could not find container 
\"50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682\": container with ID starting with 50c87837343c21f6c696f8a4d45c097939acce86c6e84bcf6063952335c4f682 not found: ID does not exist" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.101731 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.114099 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.133725 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:58:51 crc kubenswrapper[4689]: E1201 08:58:51.134245 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerName="glance-log" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.134271 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerName="glance-log" Dec 01 08:58:51 crc kubenswrapper[4689]: E1201 08:58:51.134306 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerName="glance-httpd" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.134316 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerName="glance-httpd" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.134562 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerName="glance-httpd" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.134596 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" containerName="glance-log" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.135836 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.142958 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.143211 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.145657 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.220057 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097455e0-57c7-4c8e-bd14-86890aecc860-logs\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.220479 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.220590 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-scripts\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.220638 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.220963 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/097455e0-57c7-4c8e-bd14-86890aecc860-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.221079 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.221418 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmx2\" (UniqueName: \"kubernetes.io/projected/097455e0-57c7-4c8e-bd14-86890aecc860-kube-api-access-pdmx2\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.222308 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-config-data\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.325251 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097455e0-57c7-4c8e-bd14-86890aecc860-logs\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.325458 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.325546 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-scripts\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.325584 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.325654 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/097455e0-57c7-4c8e-bd14-86890aecc860-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.325712 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.325796 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmx2\" (UniqueName: \"kubernetes.io/projected/097455e0-57c7-4c8e-bd14-86890aecc860-kube-api-access-pdmx2\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.325860 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-config-data\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.325947 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097455e0-57c7-4c8e-bd14-86890aecc860-logs\") 
pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.326421 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.327093 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/097455e0-57c7-4c8e-bd14-86890aecc860-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.332080 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-scripts\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.335135 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.344101 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.344907 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097455e0-57c7-4c8e-bd14-86890aecc860-config-data\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.346624 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmx2\" (UniqueName: \"kubernetes.io/projected/097455e0-57c7-4c8e-bd14-86890aecc860-kube-api-access-pdmx2\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.416201 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"097455e0-57c7-4c8e-bd14-86890aecc860\") " pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.472266 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.689051 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.742886 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-combined-ca-bundle\") pod \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.742984 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-internal-tls-certs\") pod \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.743075 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-scripts\") pod \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.743123 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-config-data\") pod \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.743157 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-logs\") pod \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.743194 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-httpd-run\") pod \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.743215 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bgtn\" (UniqueName: \"kubernetes.io/projected/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-kube-api-access-4bgtn\") pod \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.743240 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\" (UID: \"4b734f82-967b-49b2-bb9e-2f17fdcf54d3\") " Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.749241 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4b734f82-967b-49b2-bb9e-2f17fdcf54d3" (UID: "4b734f82-967b-49b2-bb9e-2f17fdcf54d3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.750481 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "4b734f82-967b-49b2-bb9e-2f17fdcf54d3" (UID: "4b734f82-967b-49b2-bb9e-2f17fdcf54d3"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.754630 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-kube-api-access-4bgtn" (OuterVolumeSpecName: "kube-api-access-4bgtn") pod "4b734f82-967b-49b2-bb9e-2f17fdcf54d3" (UID: "4b734f82-967b-49b2-bb9e-2f17fdcf54d3"). InnerVolumeSpecName "kube-api-access-4bgtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.761638 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-logs" (OuterVolumeSpecName: "logs") pod "4b734f82-967b-49b2-bb9e-2f17fdcf54d3" (UID: "4b734f82-967b-49b2-bb9e-2f17fdcf54d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.776203 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-scripts" (OuterVolumeSpecName: "scripts") pod "4b734f82-967b-49b2-bb9e-2f17fdcf54d3" (UID: "4b734f82-967b-49b2-bb9e-2f17fdcf54d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.797198 4689 generic.go:334] "Generic (PLEG): container finished" podID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerID="707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d" exitCode=0 Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.797263 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b734f82-967b-49b2-bb9e-2f17fdcf54d3","Type":"ContainerDied","Data":"707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d"} Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.797287 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b734f82-967b-49b2-bb9e-2f17fdcf54d3","Type":"ContainerDied","Data":"23979d6da9c474832d80c68bd7e3d13ee4fd6ef75303ee623b5c595760455ab0"} Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.797324 4689 scope.go:117] "RemoveContainer" containerID="707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.797757 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.839601 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b734f82-967b-49b2-bb9e-2f17fdcf54d3" (UID: "4b734f82-967b-49b2-bb9e-2f17fdcf54d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.844881 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.844913 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.844921 4689 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.844930 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bgtn\" (UniqueName: \"kubernetes.io/projected/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-kube-api-access-4bgtn\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.844958 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.844967 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.846445 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-config-data" (OuterVolumeSpecName: "config-data") pod "4b734f82-967b-49b2-bb9e-2f17fdcf54d3" (UID: "4b734f82-967b-49b2-bb9e-2f17fdcf54d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.858431 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b734f82-967b-49b2-bb9e-2f17fdcf54d3" (UID: "4b734f82-967b-49b2-bb9e-2f17fdcf54d3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.876024 4689 scope.go:117] "RemoveContainer" containerID="bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.889631 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.928937 4689 scope.go:117] "RemoveContainer" containerID="707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d" Dec 01 08:58:51 crc kubenswrapper[4689]: E1201 08:58:51.932617 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d\": container with ID starting with 707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d not found: ID does not exist" containerID="707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.932651 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d"} err="failed to get container status \"707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d\": rpc error: code = NotFound desc = could not find container \"707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d\": container with ID starting with 707fec18c0267bdb413677b34b46b97f18071f6dc9d667142ed084a3e1f0ab5d not found: ID does not exist" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.932674 4689 scope.go:117] "RemoveContainer" containerID="bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d" Dec 01 08:58:51 crc kubenswrapper[4689]: E1201 08:58:51.935201 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d\": container with ID starting with bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d not found: ID does not exist" containerID="bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.935230 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d"} err="failed to get container status \"bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d\": rpc error: code = NotFound desc = could not find container \"bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d\": container with ID starting with bceeee0903dfff5ae94b2c6888c44d8c4722f09c38c476bf04eac8d1ee7b266d not found: ID does not exist" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.946964 4689 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.947297 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b734f82-967b-49b2-bb9e-2f17fdcf54d3-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:51 crc kubenswrapper[4689]: I1201 08:58:51.947312 4689 
reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.053460 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.150738 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.158662 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.191680 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:58:52 crc kubenswrapper[4689]: E1201 08:58:52.192115 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerName="glance-log" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.192140 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerName="glance-log" Dec 01 08:58:52 crc kubenswrapper[4689]: E1201 08:58:52.192165 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerName="glance-httpd" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.192172 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerName="glance-httpd" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.192395 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerName="glance-httpd" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.192418 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" containerName="glance-log" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.193659 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.225161 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.226127 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.233903 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.245882 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d65b9788-2kr5p" podUID="fcebf70c-3de0-499e-928d-3419299a512f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.274534 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dd0e22c-0f46-4089-9ef0-7882c6068697-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.274613 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp6q5\" (UniqueName: \"kubernetes.io/projected/5dd0e22c-0f46-4089-9ef0-7882c6068697-kube-api-access-pp6q5\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.274642 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.274679 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.274711 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.274764 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.274832 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd0e22c-0f46-4089-9ef0-7882c6068697-logs\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.274944 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.376867 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd0e22c-0f46-4089-9ef0-7882c6068697-logs\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.376961 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.377018 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dd0e22c-0f46-4089-9ef0-7882c6068697-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.377046 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp6q5\" (UniqueName: \"kubernetes.io/projected/5dd0e22c-0f46-4089-9ef0-7882c6068697-kube-api-access-pp6q5\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.377066 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.377086 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.377109 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.377139 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.378565 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd0e22c-0f46-4089-9ef0-7882c6068697-logs\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.400781 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.401043 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5dd0e22c-0f46-4089-9ef0-7882c6068697-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.403845 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.407649 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.410595 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.434129 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.476037 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp6q5\" (UniqueName: \"kubernetes.io/projected/5dd0e22c-0f46-4089-9ef0-7882c6068697-kube-api-access-pp6q5\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.480611 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.509198 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dd0e22c-0f46-4089-9ef0-7882c6068697-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5dd0e22c-0f46-4089-9ef0-7882c6068697\") " pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.618887 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 01 08:58:52 crc kubenswrapper[4689]: I1201 08:58:52.834982 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"097455e0-57c7-4c8e-bd14-86890aecc860","Type":"ContainerStarted","Data":"c401d12afba537de53c5f1020953e13b09188c0e155995877a2119d082196b87"} Dec 01 08:58:53 crc kubenswrapper[4689]: I1201 08:58:53.061651 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df3ac61-47b1-4ca0-a5a2-80b2a94d1361" path="/var/lib/kubelet/pods/2df3ac61-47b1-4ca0-a5a2-80b2a94d1361/volumes" Dec 01 08:58:53 crc kubenswrapper[4689]: I1201 08:58:53.062790 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b734f82-967b-49b2-bb9e-2f17fdcf54d3" path="/var/lib/kubelet/pods/4b734f82-967b-49b2-bb9e-2f17fdcf54d3/volumes" Dec 01 08:58:53 crc kubenswrapper[4689]: I1201 08:58:53.237154 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 01 08:58:53 crc kubenswrapper[4689]: W1201 08:58:53.244867 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dd0e22c_0f46_4089_9ef0_7882c6068697.slice/crio-486b28e3c61661f063be89f9bebccf2def370e429ab9aa101ba0dd96e7d1ce61 WatchSource:0}: Error finding container 486b28e3c61661f063be89f9bebccf2def370e429ab9aa101ba0dd96e7d1ce61: Status 404 returned error can't find the container with id 486b28e3c61661f063be89f9bebccf2def370e429ab9aa101ba0dd96e7d1ce61 Dec 01 08:58:53 crc kubenswrapper[4689]: I1201 08:58:53.858215 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5dd0e22c-0f46-4089-9ef0-7882c6068697","Type":"ContainerStarted","Data":"486b28e3c61661f063be89f9bebccf2def370e429ab9aa101ba0dd96e7d1ce61"} Dec 01 08:58:53 crc kubenswrapper[4689]: I1201 08:58:53.861735 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"097455e0-57c7-4c8e-bd14-86890aecc860","Type":"ContainerStarted","Data":"62bf48a6f5ee1527a0dd091b994696ee2b22caba2db264c60cbbb480896c2817"} Dec 01 08:58:54 crc kubenswrapper[4689]: I1201 08:58:54.891147 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"097455e0-57c7-4c8e-bd14-86890aecc860","Type":"ContainerStarted","Data":"bfc3ed1264d1e3e2e3ee52ac0907964c7cfd9c6335a071742124defc694a8c00"} Dec 01 08:58:54 crc kubenswrapper[4689]: I1201 08:58:54.894679 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5dd0e22c-0f46-4089-9ef0-7882c6068697","Type":"ContainerStarted","Data":"9cf773f26fac668fb8b2e478c84a04f7dfc18f545c05b2a99b85e7e8266b7890"} Dec 01 08:58:54 crc kubenswrapper[4689]: I1201 08:58:54.894716 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5dd0e22c-0f46-4089-9ef0-7882c6068697","Type":"ContainerStarted","Data":"96c59a20773fc3a6df3d2773b078cb41dc12ec77b6a914ab697cfefc90545752"} Dec 01 08:58:54 
crc kubenswrapper[4689]: I1201 08:58:54.956528 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.9565142079999998 podStartE2EDuration="2.956514208s" podCreationTimestamp="2025-12-01 08:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:58:54.950010089 +0000 UTC m=+1215.022297993" watchObservedRunningTime="2025-12-01 08:58:54.956514208 +0000 UTC m=+1215.028802112" Dec 01 08:58:54 crc kubenswrapper[4689]: I1201 08:58:54.957620 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.957614448 podStartE2EDuration="3.957614448s" podCreationTimestamp="2025-12-01 08:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:58:54.927477982 +0000 UTC m=+1214.999765896" watchObservedRunningTime="2025-12-01 08:58:54.957614448 +0000 UTC m=+1215.029902352" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.435078 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.582636 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-log-httpd\") pod \"c79796e3-78be-4803-9627-c8c769fc9f59\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.582737 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-combined-ca-bundle\") pod \"c79796e3-78be-4803-9627-c8c769fc9f59\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.582759 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-scripts\") pod \"c79796e3-78be-4803-9627-c8c769fc9f59\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.582821 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl658\" (UniqueName: \"kubernetes.io/projected/c79796e3-78be-4803-9627-c8c769fc9f59-kube-api-access-cl658\") pod \"c79796e3-78be-4803-9627-c8c769fc9f59\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.582885 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-config-data\") pod \"c79796e3-78be-4803-9627-c8c769fc9f59\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.582926 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-sg-core-conf-yaml\") pod \"c79796e3-78be-4803-9627-c8c769fc9f59\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.582996 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-run-httpd\") pod \"c79796e3-78be-4803-9627-c8c769fc9f59\" (UID: \"c79796e3-78be-4803-9627-c8c769fc9f59\") " Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.583944 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c79796e3-78be-4803-9627-c8c769fc9f59" (UID: "c79796e3-78be-4803-9627-c8c769fc9f59"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.584508 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c79796e3-78be-4803-9627-c8c769fc9f59" (UID: "c79796e3-78be-4803-9627-c8c769fc9f59"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.592554 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-scripts" (OuterVolumeSpecName: "scripts") pod "c79796e3-78be-4803-9627-c8c769fc9f59" (UID: "c79796e3-78be-4803-9627-c8c769fc9f59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.607124 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79796e3-78be-4803-9627-c8c769fc9f59-kube-api-access-cl658" (OuterVolumeSpecName: "kube-api-access-cl658") pod "c79796e3-78be-4803-9627-c8c769fc9f59" (UID: "c79796e3-78be-4803-9627-c8c769fc9f59"). InnerVolumeSpecName "kube-api-access-cl658". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.627925 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c79796e3-78be-4803-9627-c8c769fc9f59" (UID: "c79796e3-78be-4803-9627-c8c769fc9f59"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.685971 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.686001 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl658\" (UniqueName: \"kubernetes.io/projected/c79796e3-78be-4803-9627-c8c769fc9f59-kube-api-access-cl658\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.686015 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.686026 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.686037 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c79796e3-78be-4803-9627-c8c769fc9f59-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.705533 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-config-data" (OuterVolumeSpecName: "config-data") pod "c79796e3-78be-4803-9627-c8c769fc9f59" (UID: "c79796e3-78be-4803-9627-c8c769fc9f59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.722199 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c79796e3-78be-4803-9627-c8c769fc9f59" (UID: "c79796e3-78be-4803-9627-c8c769fc9f59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.787560 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.787772 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79796e3-78be-4803-9627-c8c769fc9f59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.915664 4689 generic.go:334] "Generic (PLEG): container finished" podID="c79796e3-78be-4803-9627-c8c769fc9f59" containerID="475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b" exitCode=0 Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.915711 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerDied","Data":"475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b"} Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.915738 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c79796e3-78be-4803-9627-c8c769fc9f59","Type":"ContainerDied","Data":"12eddc44405cabf2d76f29a6af7f7f3f46cd443f510dcac629dfc811f23db818"} Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.915757 4689 scope.go:117] "RemoveContainer" containerID="20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.915818 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.940237 4689 scope.go:117] "RemoveContainer" containerID="3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee" Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.963451 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:56 crc kubenswrapper[4689]: I1201 08:58:56.998747 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.013118 4689 scope.go:117] "RemoveContainer" containerID="5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.016713 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:57 crc kubenswrapper[4689]: E1201 08:58:57.017166 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="ceilometer-notification-agent" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.017264 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="ceilometer-notification-agent" Dec 01 08:58:57 crc kubenswrapper[4689]: E1201 08:58:57.017323 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="sg-core" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.017386 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="sg-core" Dec 01 08:58:57 crc kubenswrapper[4689]: E1201 08:58:57.017458 4689 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="proxy-httpd" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.017509 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="proxy-httpd" Dec 01 08:58:57 crc kubenswrapper[4689]: E1201 08:58:57.017586 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="ceilometer-central-agent" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.017635 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="ceilometer-central-agent" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.017878 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="ceilometer-notification-agent" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.017956 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="proxy-httpd" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.018029 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="sg-core" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.018089 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" containerName="ceilometer-central-agent" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.019912 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.032673 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.033094 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.077844 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79796e3-78be-4803-9627-c8c769fc9f59" path="/var/lib/kubelet/pods/c79796e3-78be-4803-9627-c8c769fc9f59/volumes" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.078777 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.087091 4689 scope.go:117] "RemoveContainer" containerID="475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.095572 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hjzq\" (UniqueName: \"kubernetes.io/projected/ca91aa7d-c591-4a04-81f6-738d5939ffed-kube-api-access-9hjzq\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.095643 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.095928 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-config-data\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.096032 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-run-httpd\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.096160 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.096184 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-scripts\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.096230 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-log-httpd\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.125124 4689 scope.go:117] "RemoveContainer" containerID="20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028" Dec 01 08:58:57 crc kubenswrapper[4689]: E1201 08:58:57.125961 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028\": container with ID starting with 20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028 not found: ID does not exist" containerID="20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.126018 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028"} err="failed to get container status \"20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028\": rpc error: code = NotFound desc = could not find container \"20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028\": container with ID starting with 20862422bdfc0395148b7b7e59fd36784331e1a7ce4e9042aa210db85555e028 not found: ID does not exist" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.126059 4689 scope.go:117] "RemoveContainer" containerID="3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee" Dec 01 08:58:57 crc kubenswrapper[4689]: E1201 08:58:57.127256 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee\": container with ID starting with 3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee not found: ID does not exist" containerID="3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee" Dec 01 08:58:57 crc 
kubenswrapper[4689]: I1201 08:58:57.127304 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee"} err="failed to get container status \"3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee\": rpc error: code = NotFound desc = could not find container \"3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee\": container with ID starting with 3946a7012c056df89e0ea86c4c963481d7988fe042c953d2e327a780b54faeee not found: ID does not exist" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.127339 4689 scope.go:117] "RemoveContainer" containerID="5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6" Dec 01 08:58:57 crc kubenswrapper[4689]: E1201 08:58:57.128581 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6\": container with ID starting with 5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6 not found: ID does not exist" containerID="5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.128618 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6"} err="failed to get container status \"5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6\": rpc error: code = NotFound desc = could not find container \"5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6\": container with ID starting with 5232d351a200019264c534577cc10a2cf0f3b61d1023f0563bbb71605c8d3cc6 not found: ID does not exist" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.128652 4689 scope.go:117] "RemoveContainer" containerID="475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b" Dec 01 08:58:57 crc kubenswrapper[4689]: E1201 08:58:57.128972 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b\": container with ID starting with 475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b not found: ID does not exist" containerID="475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.128990 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b"} err="failed to get container status \"475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b\": rpc error: code = NotFound desc = could not find container \"475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b\": container with ID starting with 475df4729eed2632dbd08b2aa785b49d951b7332c6ece3f03e6ed362b9c5810b not found: ID does not exist" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.197706 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-config-data\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.197776 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-run-httpd\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.197868 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.197895 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-scripts\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.197926 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-log-httpd\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.198015 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hjzq\" (UniqueName: \"kubernetes.io/projected/ca91aa7d-c591-4a04-81f6-738d5939ffed-kube-api-access-9hjzq\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.198085 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.198521 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-run-httpd\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.198614 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-log-httpd\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.203437 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.203940 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-config-data\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.205834 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-scripts\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.213164 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.221929 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hjzq\" (UniqueName: \"kubernetes.io/projected/ca91aa7d-c591-4a04-81f6-738d5939ffed-kube-api-access-9hjzq\") pod \"ceilometer-0\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.372481 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.905967 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:58:57 crc kubenswrapper[4689]: I1201 08:58:57.947030 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerStarted","Data":"68f912aaad9845f50ba0e832c61fa54c9f753cfd95401f0a69ab4e143f3c8670"} Dec 01 08:58:59 crc kubenswrapper[4689]: I1201 08:58:59.976947 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerStarted","Data":"66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803"} Dec 01 08:58:59 crc kubenswrapper[4689]: I1201 08:58:59.977525 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerStarted","Data":"83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8"} Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.040951 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-zc7tc"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.045508 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.071782 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zc7tc"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.139028 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-w59kl"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.141600 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.155757 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w59kl"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.162360 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-operator-scripts\") pod \"nova-api-db-create-zc7tc\" (UID: \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\") " pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.162435 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmdpn\" (UniqueName: \"kubernetes.io/projected/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-kube-api-access-wmdpn\") pod \"nova-api-db-create-zc7tc\" (UID: \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\") " pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.229506 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c0cb-account-create-update-j6d5l"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.230569 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.235739 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.244074 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c0cb-account-create-update-j6d5l"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.263244 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86d5f546-10fa-4682-95b1-df605b5f23dc-operator-scripts\") pod \"nova-cell0-db-create-w59kl\" (UID: \"86d5f546-10fa-4682-95b1-df605b5f23dc\") " pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.266050 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-operator-scripts\") pod \"nova-api-db-create-zc7tc\" (UID: \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\") " pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.266278 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmdpn\" (UniqueName: \"kubernetes.io/projected/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-kube-api-access-wmdpn\") pod \"nova-api-db-create-zc7tc\" (UID: \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\") " pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.266484 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxkx\" (UniqueName: \"kubernetes.io/projected/86d5f546-10fa-4682-95b1-df605b5f23dc-kube-api-access-7kxkx\") pod \"nova-cell0-db-create-w59kl\" (UID: \"86d5f546-10fa-4682-95b1-df605b5f23dc\") " pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.267208 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-operator-scripts\") pod \"nova-api-db-create-zc7tc\" (UID: \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\") " pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.295487 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmdpn\" (UniqueName: \"kubernetes.io/projected/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-kube-api-access-wmdpn\") pod \"nova-api-db-create-zc7tc\" (UID: \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\") " pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.325538 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-rl897"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.330445 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.349178 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rl897"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.369823 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-operator-scripts\") pod \"nova-api-c0cb-account-create-update-j6d5l\" (UID: \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\") " pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.369900 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kxkx\" (UniqueName: \"kubernetes.io/projected/86d5f546-10fa-4682-95b1-df605b5f23dc-kube-api-access-7kxkx\") pod \"nova-cell0-db-create-w59kl\" (UID: \"86d5f546-10fa-4682-95b1-df605b5f23dc\") " pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.370447 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86d5f546-10fa-4682-95b1-df605b5f23dc-operator-scripts\") pod \"nova-cell0-db-create-w59kl\" (UID: \"86d5f546-10fa-4682-95b1-df605b5f23dc\") " pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.371076 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86d5f546-10fa-4682-95b1-df605b5f23dc-operator-scripts\") pod \"nova-cell0-db-create-w59kl\" (UID: \"86d5f546-10fa-4682-95b1-df605b5f23dc\") " pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.371238 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.371878 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74kfv\" (UniqueName: \"kubernetes.io/projected/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-kube-api-access-74kfv\") pod \"nova-api-c0cb-account-create-update-j6d5l\" (UID: \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\") " pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.385490 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kxkx\" (UniqueName: \"kubernetes.io/projected/86d5f546-10fa-4682-95b1-df605b5f23dc-kube-api-access-7kxkx\") pod \"nova-cell0-db-create-w59kl\" (UID: \"86d5f546-10fa-4682-95b1-df605b5f23dc\") " pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.459359 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c9c8-account-create-update-h265p"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.461132 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.461902 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.465311 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.473119 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c9c8-account-create-update-h265p"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.476205 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzgrp\" (UniqueName: \"kubernetes.io/projected/6ca8b1e2-5166-488b-bda5-7c97602825da-kube-api-access-fzgrp\") pod \"nova-cell1-db-create-rl897\" (UID: \"6ca8b1e2-5166-488b-bda5-7c97602825da\") " pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.476289 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-operator-scripts\") pod \"nova-api-c0cb-account-create-update-j6d5l\" (UID: \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\") " pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.476327 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ca8b1e2-5166-488b-bda5-7c97602825da-operator-scripts\") pod \"nova-cell1-db-create-rl897\" (UID: \"6ca8b1e2-5166-488b-bda5-7c97602825da\") " pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.476420 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74kfv\" (UniqueName: \"kubernetes.io/projected/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-kube-api-access-74kfv\") pod \"nova-api-c0cb-account-create-update-j6d5l\" (UID: \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\") " pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.478022 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-operator-scripts\") pod \"nova-api-c0cb-account-create-update-j6d5l\" (UID: \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\") " pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.526768 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74kfv\" (UniqueName: \"kubernetes.io/projected/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-kube-api-access-74kfv\") pod \"nova-api-c0cb-account-create-update-j6d5l\" (UID: \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\") " pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.564037 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.578642 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzgrp\" (UniqueName: \"kubernetes.io/projected/6ca8b1e2-5166-488b-bda5-7c97602825da-kube-api-access-fzgrp\") pod \"nova-cell1-db-create-rl897\" (UID: \"6ca8b1e2-5166-488b-bda5-7c97602825da\") " pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.578710 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bd29a4-4353-40b1-b147-f6257ef42632-operator-scripts\") pod \"nova-cell0-c9c8-account-create-update-h265p\" (UID: \"02bd29a4-4353-40b1-b147-f6257ef42632\") " pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.578781 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2sl\" (UniqueName: \"kubernetes.io/projected/02bd29a4-4353-40b1-b147-f6257ef42632-kube-api-access-jk2sl\") pod \"nova-cell0-c9c8-account-create-update-h265p\" (UID: \"02bd29a4-4353-40b1-b147-f6257ef42632\") " pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.578840 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ca8b1e2-5166-488b-bda5-7c97602825da-operator-scripts\") pod \"nova-cell1-db-create-rl897\" (UID: \"6ca8b1e2-5166-488b-bda5-7c97602825da\") " pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.579612 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ca8b1e2-5166-488b-bda5-7c97602825da-operator-scripts\") pod \"nova-cell1-db-create-rl897\" (UID: \"6ca8b1e2-5166-488b-bda5-7c97602825da\") " pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.596981 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzgrp\" (UniqueName: \"kubernetes.io/projected/6ca8b1e2-5166-488b-bda5-7c97602825da-kube-api-access-fzgrp\") pod \"nova-cell1-db-create-rl897\" (UID: \"6ca8b1e2-5166-488b-bda5-7c97602825da\") " pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.648827 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.683296 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk2sl\" (UniqueName: \"kubernetes.io/projected/02bd29a4-4353-40b1-b147-f6257ef42632-kube-api-access-jk2sl\") pod \"nova-cell0-c9c8-account-create-update-h265p\" (UID: \"02bd29a4-4353-40b1-b147-f6257ef42632\") " pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.683471 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bd29a4-4353-40b1-b147-f6257ef42632-operator-scripts\") pod \"nova-cell0-c9c8-account-create-update-h265p\" (UID: \"02bd29a4-4353-40b1-b147-f6257ef42632\") " pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.684441 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bd29a4-4353-40b1-b147-f6257ef42632-operator-scripts\") pod \"nova-cell0-c9c8-account-create-update-h265p\" (UID: \"02bd29a4-4353-40b1-b147-f6257ef42632\") " pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.692493 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-bcf0-account-create-update-r8ch4"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.693973 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.700838 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.731647 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk2sl\" (UniqueName: \"kubernetes.io/projected/02bd29a4-4353-40b1-b147-f6257ef42632-kube-api-access-jk2sl\") pod \"nova-cell0-c9c8-account-create-update-h265p\" (UID: \"02bd29a4-4353-40b1-b147-f6257ef42632\") " pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.749254 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bcf0-account-create-update-r8ch4"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.789761 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.792716 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a8b1dd-5948-4f22-98f2-a3441493cf5a-operator-scripts\") pod \"nova-cell1-bcf0-account-create-update-r8ch4\" (UID: \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\") " pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.793079 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97m7\" (UniqueName: \"kubernetes.io/projected/64a8b1dd-5948-4f22-98f2-a3441493cf5a-kube-api-access-f97m7\") pod \"nova-cell1-bcf0-account-create-update-r8ch4\" (UID: \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\") " pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.870186 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zc7tc"] Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.900350 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a8b1dd-5948-4f22-98f2-a3441493cf5a-operator-scripts\") pod \"nova-cell1-bcf0-account-create-update-r8ch4\" (UID: \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\") " pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.900522 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f97m7\" (UniqueName: \"kubernetes.io/projected/64a8b1dd-5948-4f22-98f2-a3441493cf5a-kube-api-access-f97m7\") pod \"nova-cell1-bcf0-account-create-update-r8ch4\" (UID: \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\") " pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.901566 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a8b1dd-5948-4f22-98f2-a3441493cf5a-operator-scripts\") pod \"nova-cell1-bcf0-account-create-update-r8ch4\" (UID: \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\") " pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:00 crc kubenswrapper[4689]: I1201 08:59:00.972724 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97m7\" (UniqueName: \"kubernetes.io/projected/64a8b1dd-5948-4f22-98f2-a3441493cf5a-kube-api-access-f97m7\") pod \"nova-cell1-bcf0-account-create-update-r8ch4\" (UID: \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\") " pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.126072 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.150227 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zc7tc" event={"ID":"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d","Type":"ContainerStarted","Data":"b31d607bec3aa0a643d55648f5407d7fa9a2c8323f2f27b340252fc2e163bcfa"} Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.241454 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w59kl"] Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.391775 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c0cb-account-create-update-j6d5l"] Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.473159 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.473201 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.549323 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.565354 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 01 08:59:01 crc kubenswrapper[4689]: W1201 08:59:01.585955 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ca8b1e2_5166_488b_bda5_7c97602825da.slice/crio-691acf3bf1058f669d2897b7872faba6be645dd6bcdfcd1c10fed0e5a243780f WatchSource:0}: Error finding container 691acf3bf1058f669d2897b7872faba6be645dd6bcdfcd1c10fed0e5a243780f: Status 404 returned error can't find the container with id 691acf3bf1058f669d2897b7872faba6be645dd6bcdfcd1c10fed0e5a243780f Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.588490 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-rl897"] Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.782002 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c9c8-account-create-update-h265p"] Dec 01 08:59:01 crc kubenswrapper[4689]: I1201 08:59:01.820485 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bcf0-account-create-update-r8ch4"] Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.159712 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerStarted","Data":"4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.161466 4689 generic.go:334] "Generic (PLEG): container finished" podID="86d5f546-10fa-4682-95b1-df605b5f23dc" containerID="3e637158c35c8e1c0dae4fd60dc0dea19f163dc46207fee3e04d157ea4c846fd" exitCode=0 Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.161504 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w59kl" event={"ID":"86d5f546-10fa-4682-95b1-df605b5f23dc","Type":"ContainerDied","Data":"3e637158c35c8e1c0dae4fd60dc0dea19f163dc46207fee3e04d157ea4c846fd"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.161521 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-w59kl" event={"ID":"86d5f546-10fa-4682-95b1-df605b5f23dc","Type":"ContainerStarted","Data":"a416e69584f907d0e111ed80016c1048e73598cd7a2cf4b4fd5c52168366c336"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.163354 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rl897" event={"ID":"6ca8b1e2-5166-488b-bda5-7c97602825da","Type":"ContainerStarted","Data":"17232e796d8a818724743928d131f349c861e7eff94f62322ec1b506bf8db74f"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.163415 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rl897" event={"ID":"6ca8b1e2-5166-488b-bda5-7c97602825da","Type":"ContainerStarted","Data":"691acf3bf1058f669d2897b7872faba6be645dd6bcdfcd1c10fed0e5a243780f"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.188191 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c9c8-account-create-update-h265p" event={"ID":"02bd29a4-4353-40b1-b147-f6257ef42632","Type":"ContainerStarted","Data":"c607840f0fe8fa91a0d1eb02fcfe35cf563783bb0ebf1e6a567d1ba0417e493f"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.193458 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c0cb-account-create-update-j6d5l" event={"ID":"69206391-deb0-4dc2-a5e9-a7f7cbdd7844","Type":"ContainerStarted","Data":"75f0f832ac4509a1221c7b9ecc66cec584668738ecc2eef3558817e0472833af"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.193485 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c0cb-account-create-update-j6d5l" event={"ID":"69206391-deb0-4dc2-a5e9-a7f7cbdd7844","Type":"ContainerStarted","Data":"557415d2e3e817bbfe95d9086949e5aa22a8c37383175d84c5cef9747caf72c1"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.198271 4689 generic.go:334] "Generic (PLEG): container finished" podID="5b9b4aab-acce-4ef4-b930-9a4942a3dc5d" containerID="437099511ae5b65c561273bb4417b703c8032f31109e6bf7c406956663ca7ff7" exitCode=0 Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.198352 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zc7tc" event={"ID":"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d","Type":"ContainerDied","Data":"437099511ae5b65c561273bb4417b703c8032f31109e6bf7c406956663ca7ff7"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.202722 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" event={"ID":"64a8b1dd-5948-4f22-98f2-a3441493cf5a","Type":"ContainerStarted","Data":"8b9adfbe91a2063c75789a901ae7351acf68cc7cecb92d2ce51d0a7d28ee0f1c"} Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.202770 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.203089 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.249767 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-rl897" podStartSLOduration=2.249747541 podStartE2EDuration="2.249747541s" podCreationTimestamp="2025-12-01 08:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:59:02.244947709 +0000 UTC m=+1222.317235603" 
watchObservedRunningTime="2025-12-01 08:59:02.249747541 +0000 UTC m=+1222.322035445" Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.620257 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.620327 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.668541 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 08:59:02 crc kubenswrapper[4689]: I1201 08:59:02.684479 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.210240 4689 generic.go:334] "Generic (PLEG): container finished" podID="6ca8b1e2-5166-488b-bda5-7c97602825da" containerID="17232e796d8a818724743928d131f349c861e7eff94f62322ec1b506bf8db74f" exitCode=0 Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.210330 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rl897" event={"ID":"6ca8b1e2-5166-488b-bda5-7c97602825da","Type":"ContainerDied","Data":"17232e796d8a818724743928d131f349c861e7eff94f62322ec1b506bf8db74f"} Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.212425 4689 generic.go:334] "Generic (PLEG): container finished" podID="02bd29a4-4353-40b1-b147-f6257ef42632" containerID="4f756820e53aae93e151411814f9d7216a7d328d3848dd95af96198798ebd783" exitCode=0 Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.212478 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c9c8-account-create-update-h265p" event={"ID":"02bd29a4-4353-40b1-b147-f6257ef42632","Type":"ContainerDied","Data":"4f756820e53aae93e151411814f9d7216a7d328d3848dd95af96198798ebd783"} Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.214157 4689 generic.go:334] "Generic (PLEG): container finished" podID="69206391-deb0-4dc2-a5e9-a7f7cbdd7844" containerID="75f0f832ac4509a1221c7b9ecc66cec584668738ecc2eef3558817e0472833af" exitCode=0 Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.214226 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c0cb-account-create-update-j6d5l" event={"ID":"69206391-deb0-4dc2-a5e9-a7f7cbdd7844","Type":"ContainerDied","Data":"75f0f832ac4509a1221c7b9ecc66cec584668738ecc2eef3558817e0472833af"} Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.215634 4689 generic.go:334] "Generic (PLEG): container finished" podID="64a8b1dd-5948-4f22-98f2-a3441493cf5a" containerID="e065f7753d63b5d66f266b4af289dd44b7bad3b9bec8b0bc8ef02553cc492924" exitCode=0 Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.215685 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" event={"ID":"64a8b1dd-5948-4f22-98f2-a3441493cf5a","Type":"ContainerDied","Data":"e065f7753d63b5d66f266b4af289dd44b7bad3b9bec8b0bc8ef02553cc492924"} Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.223271 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerStarted","Data":"1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828"} Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.223319 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-internal-api-0" Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.223339 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.223548 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.373269 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.491601629 podStartE2EDuration="7.373247011s" podCreationTimestamp="2025-12-01 08:58:56 +0000 UTC" firstStartedPulling="2025-12-01 08:58:57.904576666 +0000 UTC m=+1217.976864570" lastFinishedPulling="2025-12-01 08:59:02.786222048 +0000 UTC m=+1222.858509952" observedRunningTime="2025-12-01 08:59:03.308425155 +0000 UTC m=+1223.380713059" watchObservedRunningTime="2025-12-01 08:59:03.373247011 +0000 UTC m=+1223.445534915" Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.773439 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.912169 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kxkx\" (UniqueName: \"kubernetes.io/projected/86d5f546-10fa-4682-95b1-df605b5f23dc-kube-api-access-7kxkx\") pod \"86d5f546-10fa-4682-95b1-df605b5f23dc\" (UID: \"86d5f546-10fa-4682-95b1-df605b5f23dc\") " Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.912253 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86d5f546-10fa-4682-95b1-df605b5f23dc-operator-scripts\") pod \"86d5f546-10fa-4682-95b1-df605b5f23dc\" (UID: \"86d5f546-10fa-4682-95b1-df605b5f23dc\") " Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.913124 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86d5f546-10fa-4682-95b1-df605b5f23dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86d5f546-10fa-4682-95b1-df605b5f23dc" (UID: "86d5f546-10fa-4682-95b1-df605b5f23dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:03 crc kubenswrapper[4689]: I1201 08:59:03.939161 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d5f546-10fa-4682-95b1-df605b5f23dc-kube-api-access-7kxkx" (OuterVolumeSpecName: "kube-api-access-7kxkx") pod "86d5f546-10fa-4682-95b1-df605b5f23dc" (UID: "86d5f546-10fa-4682-95b1-df605b5f23dc"). InnerVolumeSpecName "kube-api-access-7kxkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.007454 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.015251 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kxkx\" (UniqueName: \"kubernetes.io/projected/86d5f546-10fa-4682-95b1-df605b5f23dc-kube-api-access-7kxkx\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.015274 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86d5f546-10fa-4682-95b1-df605b5f23dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.018131 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.116040 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmdpn\" (UniqueName: \"kubernetes.io/projected/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-kube-api-access-wmdpn\") pod \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\" (UID: \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\") " Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.116099 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-operator-scripts\") pod \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\" (UID: \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\") " Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.116161 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74kfv\" (UniqueName: \"kubernetes.io/projected/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-kube-api-access-74kfv\") pod \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\" (UID: \"69206391-deb0-4dc2-a5e9-a7f7cbdd7844\") " Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.116197 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-operator-scripts\") pod \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\" (UID: \"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d\") " Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.117220 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b9b4aab-acce-4ef4-b930-9a4942a3dc5d" (UID: "5b9b4aab-acce-4ef4-b930-9a4942a3dc5d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.117641 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69206391-deb0-4dc2-a5e9-a7f7cbdd7844" (UID: "69206391-deb0-4dc2-a5e9-a7f7cbdd7844"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.120441 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-kube-api-access-wmdpn" (OuterVolumeSpecName: "kube-api-access-wmdpn") pod "5b9b4aab-acce-4ef4-b930-9a4942a3dc5d" (UID: "5b9b4aab-acce-4ef4-b930-9a4942a3dc5d"). 
InnerVolumeSpecName "kube-api-access-wmdpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.122149 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-kube-api-access-74kfv" (OuterVolumeSpecName: "kube-api-access-74kfv") pod "69206391-deb0-4dc2-a5e9-a7f7cbdd7844" (UID: "69206391-deb0-4dc2-a5e9-a7f7cbdd7844"). InnerVolumeSpecName "kube-api-access-74kfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.219915 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.219948 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74kfv\" (UniqueName: \"kubernetes.io/projected/69206391-deb0-4dc2-a5e9-a7f7cbdd7844-kube-api-access-74kfv\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.220003 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.220027 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmdpn\" (UniqueName: \"kubernetes.io/projected/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d-kube-api-access-wmdpn\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.236114 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w59kl" event={"ID":"86d5f546-10fa-4682-95b1-df605b5f23dc","Type":"ContainerDied","Data":"a416e69584f907d0e111ed80016c1048e73598cd7a2cf4b4fd5c52168366c336"} Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.236165 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a416e69584f907d0e111ed80016c1048e73598cd7a2cf4b4fd5c52168366c336" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.236217 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w59kl" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.252700 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c0cb-account-create-update-j6d5l" event={"ID":"69206391-deb0-4dc2-a5e9-a7f7cbdd7844","Type":"ContainerDied","Data":"557415d2e3e817bbfe95d9086949e5aa22a8c37383175d84c5cef9747caf72c1"} Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.252746 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="557415d2e3e817bbfe95d9086949e5aa22a8c37383175d84c5cef9747caf72c1" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.252819 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c0cb-account-create-update-j6d5l" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.255605 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zc7tc" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.256728 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zc7tc" event={"ID":"5b9b4aab-acce-4ef4-b930-9a4942a3dc5d","Type":"ContainerDied","Data":"b31d607bec3aa0a643d55648f5407d7fa9a2c8323f2f27b340252fc2e163bcfa"} Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.256758 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b31d607bec3aa0a643d55648f5407d7fa9a2c8323f2f27b340252fc2e163bcfa" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.257110 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.257126 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.731249 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.835693 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzgrp\" (UniqueName: \"kubernetes.io/projected/6ca8b1e2-5166-488b-bda5-7c97602825da-kube-api-access-fzgrp\") pod \"6ca8b1e2-5166-488b-bda5-7c97602825da\" (UID: \"6ca8b1e2-5166-488b-bda5-7c97602825da\") " Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.835771 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ca8b1e2-5166-488b-bda5-7c97602825da-operator-scripts\") pod \"6ca8b1e2-5166-488b-bda5-7c97602825da\" (UID: \"6ca8b1e2-5166-488b-bda5-7c97602825da\") " Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.838255 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca8b1e2-5166-488b-bda5-7c97602825da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ca8b1e2-5166-488b-bda5-7c97602825da" (UID: "6ca8b1e2-5166-488b-bda5-7c97602825da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.843519 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca8b1e2-5166-488b-bda5-7c97602825da-kube-api-access-fzgrp" (OuterVolumeSpecName: "kube-api-access-fzgrp") pod "6ca8b1e2-5166-488b-bda5-7c97602825da" (UID: "6ca8b1e2-5166-488b-bda5-7c97602825da"). InnerVolumeSpecName "kube-api-access-fzgrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.846343 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.938744 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bd29a4-4353-40b1-b147-f6257ef42632-operator-scripts\") pod \"02bd29a4-4353-40b1-b147-f6257ef42632\" (UID: \"02bd29a4-4353-40b1-b147-f6257ef42632\") " Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.938897 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk2sl\" (UniqueName: \"kubernetes.io/projected/02bd29a4-4353-40b1-b147-f6257ef42632-kube-api-access-jk2sl\") pod \"02bd29a4-4353-40b1-b147-f6257ef42632\" (UID: \"02bd29a4-4353-40b1-b147-f6257ef42632\") " Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.939339 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzgrp\" (UniqueName: \"kubernetes.io/projected/6ca8b1e2-5166-488b-bda5-7c97602825da-kube-api-access-fzgrp\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.939356 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ca8b1e2-5166-488b-bda5-7c97602825da-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.939530 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02bd29a4-4353-40b1-b147-f6257ef42632-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02bd29a4-4353-40b1-b147-f6257ef42632" (UID: "02bd29a4-4353-40b1-b147-f6257ef42632"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.951740 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02bd29a4-4353-40b1-b147-f6257ef42632-kube-api-access-jk2sl" (OuterVolumeSpecName: "kube-api-access-jk2sl") pod "02bd29a4-4353-40b1-b147-f6257ef42632" (UID: "02bd29a4-4353-40b1-b147-f6257ef42632"). InnerVolumeSpecName "kube-api-access-jk2sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:04 crc kubenswrapper[4689]: I1201 08:59:04.958685 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.040222 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f97m7\" (UniqueName: \"kubernetes.io/projected/64a8b1dd-5948-4f22-98f2-a3441493cf5a-kube-api-access-f97m7\") pod \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\" (UID: \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\") " Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.040359 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a8b1dd-5948-4f22-98f2-a3441493cf5a-operator-scripts\") pod \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\" (UID: \"64a8b1dd-5948-4f22-98f2-a3441493cf5a\") " Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.040774 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02bd29a4-4353-40b1-b147-f6257ef42632-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.040792 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk2sl\" (UniqueName: \"kubernetes.io/projected/02bd29a4-4353-40b1-b147-f6257ef42632-kube-api-access-jk2sl\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.043135 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64a8b1dd-5948-4f22-98f2-a3441493cf5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64a8b1dd-5948-4f22-98f2-a3441493cf5a" (UID: "64a8b1dd-5948-4f22-98f2-a3441493cf5a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.057278 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a8b1dd-5948-4f22-98f2-a3441493cf5a-kube-api-access-f97m7" (OuterVolumeSpecName: "kube-api-access-f97m7") pod "64a8b1dd-5948-4f22-98f2-a3441493cf5a" (UID: "64a8b1dd-5948-4f22-98f2-a3441493cf5a"). InnerVolumeSpecName "kube-api-access-f97m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.143048 4689 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64a8b1dd-5948-4f22-98f2-a3441493cf5a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.143384 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f97m7\" (UniqueName: \"kubernetes.io/projected/64a8b1dd-5948-4f22-98f2-a3441493cf5a-kube-api-access-f97m7\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.255311 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.263002 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.264140 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.264812 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bcf0-account-create-update-r8ch4" event={"ID":"64a8b1dd-5948-4f22-98f2-a3441493cf5a","Type":"ContainerDied","Data":"8b9adfbe91a2063c75789a901ae7351acf68cc7cecb92d2ce51d0a7d28ee0f1c"} Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.264835 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b9adfbe91a2063c75789a901ae7351acf68cc7cecb92d2ce51d0a7d28ee0f1c" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.266246 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-rl897" event={"ID":"6ca8b1e2-5166-488b-bda5-7c97602825da","Type":"ContainerDied","Data":"691acf3bf1058f669d2897b7872faba6be645dd6bcdfcd1c10fed0e5a243780f"} Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.266265 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="691acf3bf1058f669d2897b7872faba6be645dd6bcdfcd1c10fed0e5a243780f" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.266303 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-rl897" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.268109 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c9c8-account-create-update-h265p" event={"ID":"02bd29a4-4353-40b1-b147-f6257ef42632","Type":"ContainerDied","Data":"c607840f0fe8fa91a0d1eb02fcfe35cf563783bb0ebf1e6a567d1ba0417e493f"} Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.268160 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c607840f0fe8fa91a0d1eb02fcfe35cf563783bb0ebf1e6a567d1ba0417e493f" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.268103 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c9c8-account-create-update-h265p" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.906336 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:59:05 crc kubenswrapper[4689]: I1201 08:59:05.910634 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:59:06 crc kubenswrapper[4689]: I1201 08:59:06.859949 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 08:59:06 crc kubenswrapper[4689]: I1201 08:59:06.860690 4689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 08:59:06 crc kubenswrapper[4689]: I1201 08:59:06.916062 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 01 08:59:08 crc kubenswrapper[4689]: I1201 08:59:08.805441 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:59:09 crc kubenswrapper[4689]: I1201 08:59:09.113788 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-d65b9788-2kr5p" Dec 01 08:59:09 crc kubenswrapper[4689]: I1201 08:59:09.220905 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d9cd9dbd-qxwq7"] Dec 01 08:59:09 crc kubenswrapper[4689]: I1201 08:59:09.526122 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon-log" containerID="cri-o://ad38fd3db04934de38e1df2739d0df091d88ad6e59e5e1dc95f4167e1c88b624" gracePeriod=30 Dec 01 08:59:09 crc kubenswrapper[4689]: I1201 08:59:09.526666 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" containerID="cri-o://dbacf385cc7e024476440ee9e90e68d5f7a572a69d91e9e613651d878c816d6c" gracePeriod=30 Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.774724 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5p44f"] Dec 01 08:59:10 crc kubenswrapper[4689]: E1201 08:59:10.775381 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9b4aab-acce-4ef4-b930-9a4942a3dc5d" containerName="mariadb-database-create" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775394 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9b4aab-acce-4ef4-b930-9a4942a3dc5d" containerName="mariadb-database-create" Dec 01 08:59:10 crc kubenswrapper[4689]: E1201 08:59:10.775410 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a8b1dd-5948-4f22-98f2-a3441493cf5a" containerName="mariadb-account-create-update" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775415 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a8b1dd-5948-4f22-98f2-a3441493cf5a" containerName="mariadb-account-create-update" Dec 01 08:59:10 crc kubenswrapper[4689]: E1201 08:59:10.775437 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bd29a4-4353-40b1-b147-f6257ef42632" containerName="mariadb-account-create-update" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775442 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bd29a4-4353-40b1-b147-f6257ef42632" 
containerName="mariadb-account-create-update" Dec 01 08:59:10 crc kubenswrapper[4689]: E1201 08:59:10.775453 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5f546-10fa-4682-95b1-df605b5f23dc" containerName="mariadb-database-create" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775459 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5f546-10fa-4682-95b1-df605b5f23dc" containerName="mariadb-database-create" Dec 01 08:59:10 crc kubenswrapper[4689]: E1201 08:59:10.775479 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69206391-deb0-4dc2-a5e9-a7f7cbdd7844" containerName="mariadb-account-create-update" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775484 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="69206391-deb0-4dc2-a5e9-a7f7cbdd7844" containerName="mariadb-account-create-update" Dec 01 08:59:10 crc kubenswrapper[4689]: E1201 08:59:10.775492 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca8b1e2-5166-488b-bda5-7c97602825da" containerName="mariadb-database-create" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775497 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca8b1e2-5166-488b-bda5-7c97602825da" containerName="mariadb-database-create" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775709 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9b4aab-acce-4ef4-b930-9a4942a3dc5d" containerName="mariadb-database-create" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775722 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="69206391-deb0-4dc2-a5e9-a7f7cbdd7844" containerName="mariadb-account-create-update" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775732 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="02bd29a4-4353-40b1-b147-f6257ef42632" containerName="mariadb-account-create-update" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775745 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca8b1e2-5166-488b-bda5-7c97602825da" containerName="mariadb-database-create" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775759 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5f546-10fa-4682-95b1-df605b5f23dc" containerName="mariadb-database-create" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.775780 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a8b1dd-5948-4f22-98f2-a3441493cf5a" containerName="mariadb-account-create-update" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.776559 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.782225 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pkghb" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.788193 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5p44f"] Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.790258 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.791984 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.880680 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-scripts\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.880752 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csqd6\" (UniqueName: \"kubernetes.io/projected/961fc017-9eb5-427a-8d58-a189c19eadc5-kube-api-access-csqd6\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.880860 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.880903 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-config-data\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.982329 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.982424 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-config-data\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.982486 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-scripts\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: 
\"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.982532 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csqd6\" (UniqueName: \"kubernetes.io/projected/961fc017-9eb5-427a-8d58-a189c19eadc5-kube-api-access-csqd6\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.990668 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-config-data\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.991098 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-scripts\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:10 crc kubenswrapper[4689]: I1201 08:59:10.995069 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:11 crc kubenswrapper[4689]: I1201 08:59:11.043638 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csqd6\" (UniqueName: \"kubernetes.io/projected/961fc017-9eb5-427a-8d58-a189c19eadc5-kube-api-access-csqd6\") pod \"nova-cell0-conductor-db-sync-5p44f\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:11 crc kubenswrapper[4689]: I1201 08:59:11.095096 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:11 crc kubenswrapper[4689]: I1201 08:59:11.657323 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5p44f"] Dec 01 08:59:11 crc kubenswrapper[4689]: W1201 08:59:11.672114 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod961fc017_9eb5_427a_8d58_a189c19eadc5.slice/crio-6ad27c1ea204e77fd72748173c41350f5af7398095110c35d0f6fcc4129cc722 WatchSource:0}: Error finding container 6ad27c1ea204e77fd72748173c41350f5af7398095110c35d0f6fcc4129cc722: Status 404 returned error can't find the container with id 6ad27c1ea204e77fd72748173c41350f5af7398095110c35d0f6fcc4129cc722 Dec 01 08:59:11 crc kubenswrapper[4689]: I1201 08:59:11.674756 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 08:59:12 crc kubenswrapper[4689]: I1201 08:59:12.558450 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5p44f" event={"ID":"961fc017-9eb5-427a-8d58-a189c19eadc5","Type":"ContainerStarted","Data":"6ad27c1ea204e77fd72748173c41350f5af7398095110c35d0f6fcc4129cc722"} Dec 01 08:59:12 crc kubenswrapper[4689]: I1201 08:59:12.779854 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:51126->10.217.0.149:8443: read: connection reset by peer" Dec 01 08:59:13 crc kubenswrapper[4689]: I1201 08:59:13.570028 4689 generic.go:334] "Generic (PLEG): container finished" podID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerID="dbacf385cc7e024476440ee9e90e68d5f7a572a69d91e9e613651d878c816d6c" exitCode=0 Dec 01 08:59:13 crc kubenswrapper[4689]: I1201 08:59:13.570110 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9cd9dbd-qxwq7" event={"ID":"e88c04bb-01ff-47a6-8942-05a9a2a68416","Type":"ContainerDied","Data":"dbacf385cc7e024476440ee9e90e68d5f7a572a69d91e9e613651d878c816d6c"} Dec 01 08:59:13 crc kubenswrapper[4689]: I1201 08:59:13.570480 4689 scope.go:117] "RemoveContainer" containerID="a091448b207aa75d136d6feb237ad0fa14303d634a2df9de676e06282a8c25ec" Dec 01 08:59:19 crc kubenswrapper[4689]: I1201 08:59:19.636198 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5p44f" event={"ID":"961fc017-9eb5-427a-8d58-a189c19eadc5","Type":"ContainerStarted","Data":"795a3f0771aad515880e5a8e9d27c0f932059710825db00e8c874b00486bc760"} Dec 01 08:59:19 crc kubenswrapper[4689]: I1201 08:59:19.657781 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-5p44f" podStartSLOduration=1.921697687 podStartE2EDuration="9.657757819s" podCreationTimestamp="2025-12-01 08:59:10 +0000 UTC" firstStartedPulling="2025-12-01 08:59:11.67444458 +0000 UTC m=+1231.746732494" lastFinishedPulling="2025-12-01 08:59:19.410504722 +0000 UTC m=+1239.482792626" observedRunningTime="2025-12-01 08:59:19.651912341 +0000 UTC m=+1239.724200335" watchObservedRunningTime="2025-12-01 08:59:19.657757819 +0000 UTC m=+1239.730045723" Dec 01 08:59:22 crc kubenswrapper[4689]: I1201 08:59:22.050682 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d9cd9dbd-qxwq7" 
podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 01 08:59:27 crc kubenswrapper[4689]: I1201 08:59:27.382449 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 01 08:59:31 crc kubenswrapper[4689]: I1201 08:59:31.749938 4689 generic.go:334] "Generic (PLEG): container finished" podID="961fc017-9eb5-427a-8d58-a189c19eadc5" containerID="795a3f0771aad515880e5a8e9d27c0f932059710825db00e8c874b00486bc760" exitCode=0 Dec 01 08:59:31 crc kubenswrapper[4689]: I1201 08:59:31.750012 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5p44f" event={"ID":"961fc017-9eb5-427a-8d58-a189c19eadc5","Type":"ContainerDied","Data":"795a3f0771aad515880e5a8e9d27c0f932059710825db00e8c874b00486bc760"} Dec 01 08:59:31 crc kubenswrapper[4689]: I1201 08:59:31.927611 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 08:59:31 crc kubenswrapper[4689]: I1201 08:59:31.927873 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="06aa5768-7753-4a2d-8e40-96cea62d055c" containerName="kube-state-metrics" containerID="cri-o://ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d" gracePeriod=30 Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.050671 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d9cd9dbd-qxwq7" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.050763 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.677636 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.762714 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.762773 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"06aa5768-7753-4a2d-8e40-96cea62d055c","Type":"ContainerDied","Data":"ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d"} Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.762819 4689 scope.go:117] "RemoveContainer" containerID="ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d" Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.762629 4689 generic.go:334] "Generic (PLEG): container finished" podID="06aa5768-7753-4a2d-8e40-96cea62d055c" containerID="ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d" exitCode=2 Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.763346 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"06aa5768-7753-4a2d-8e40-96cea62d055c","Type":"ContainerDied","Data":"4686626c6e4ed23fe9fa424e4f1dc1c9ac0d67a1a54b5a1938e0bd987d830ba0"} Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.794872 4689 scope.go:117] "RemoveContainer" containerID="ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d" Dec 01 08:59:32 crc kubenswrapper[4689]: E1201 08:59:32.796403 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d\": container with ID starting with ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d not found: ID does not exist" containerID="ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d" Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.796490 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d"} err="failed to get container status \"ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d\": rpc error: code = NotFound desc = could not find container \"ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d\": container with ID starting with ede28f8b9139bfbecb340bb19f7194de7a0396dcd6e9144ebd65ce65a30b701d not found: ID does not exist" Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.807175 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq4xn\" (UniqueName: \"kubernetes.io/projected/06aa5768-7753-4a2d-8e40-96cea62d055c-kube-api-access-fq4xn\") pod \"06aa5768-7753-4a2d-8e40-96cea62d055c\" (UID: \"06aa5768-7753-4a2d-8e40-96cea62d055c\") " Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.813565 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06aa5768-7753-4a2d-8e40-96cea62d055c-kube-api-access-fq4xn" (OuterVolumeSpecName: "kube-api-access-fq4xn") pod "06aa5768-7753-4a2d-8e40-96cea62d055c" (UID: "06aa5768-7753-4a2d-8e40-96cea62d055c"). InnerVolumeSpecName "kube-api-access-fq4xn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:32 crc kubenswrapper[4689]: I1201 08:59:32.909902 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq4xn\" (UniqueName: \"kubernetes.io/projected/06aa5768-7753-4a2d-8e40-96cea62d055c-kube-api-access-fq4xn\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.193673 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.206868 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.231594 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.296473 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 08:59:33 crc kubenswrapper[4689]: E1201 08:59:33.297070 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961fc017-9eb5-427a-8d58-a189c19eadc5" containerName="nova-cell0-conductor-db-sync" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.297137 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="961fc017-9eb5-427a-8d58-a189c19eadc5" containerName="nova-cell0-conductor-db-sync" Dec 01 08:59:33 crc kubenswrapper[4689]: E1201 08:59:33.297201 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06aa5768-7753-4a2d-8e40-96cea62d055c" containerName="kube-state-metrics" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.297278 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="06aa5768-7753-4a2d-8e40-96cea62d055c" containerName="kube-state-metrics" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.297522 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="961fc017-9eb5-427a-8d58-a189c19eadc5" containerName="nova-cell0-conductor-db-sync" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.297588 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="06aa5768-7753-4a2d-8e40-96cea62d055c" containerName="kube-state-metrics" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.298295 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.302572 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.308461 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.314022 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.326051 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-config-data\") pod \"961fc017-9eb5-427a-8d58-a189c19eadc5\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.326207 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csqd6\" (UniqueName: \"kubernetes.io/projected/961fc017-9eb5-427a-8d58-a189c19eadc5-kube-api-access-csqd6\") pod \"961fc017-9eb5-427a-8d58-a189c19eadc5\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.326338 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-combined-ca-bundle\") pod \"961fc017-9eb5-427a-8d58-a189c19eadc5\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.326950 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-scripts\") pod \"961fc017-9eb5-427a-8d58-a189c19eadc5\" (UID: \"961fc017-9eb5-427a-8d58-a189c19eadc5\") " Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.351473 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961fc017-9eb5-427a-8d58-a189c19eadc5-kube-api-access-csqd6" (OuterVolumeSpecName: "kube-api-access-csqd6") pod "961fc017-9eb5-427a-8d58-a189c19eadc5" (UID: "961fc017-9eb5-427a-8d58-a189c19eadc5"). InnerVolumeSpecName "kube-api-access-csqd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.354104 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-scripts" (OuterVolumeSpecName: "scripts") pod "961fc017-9eb5-427a-8d58-a189c19eadc5" (UID: "961fc017-9eb5-427a-8d58-a189c19eadc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.356340 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-config-data" (OuterVolumeSpecName: "config-data") pod "961fc017-9eb5-427a-8d58-a189c19eadc5" (UID: "961fc017-9eb5-427a-8d58-a189c19eadc5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.428891 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/432574e7-df30-4103-a396-c758c4df932c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.429029 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97s8s\" (UniqueName: \"kubernetes.io/projected/432574e7-df30-4103-a396-c758c4df932c-kube-api-access-97s8s\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.429062 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432574e7-df30-4103-a396-c758c4df932c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.429111 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/432574e7-df30-4103-a396-c758c4df932c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.429177 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.429190 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.429200 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csqd6\" (UniqueName: \"kubernetes.io/projected/961fc017-9eb5-427a-8d58-a189c19eadc5-kube-api-access-csqd6\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.455081 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "961fc017-9eb5-427a-8d58-a189c19eadc5" (UID: "961fc017-9eb5-427a-8d58-a189c19eadc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.531063 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97s8s\" (UniqueName: \"kubernetes.io/projected/432574e7-df30-4103-a396-c758c4df932c-kube-api-access-97s8s\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.531133 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432574e7-df30-4103-a396-c758c4df932c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.531194 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/432574e7-df30-4103-a396-c758c4df932c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.531243 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/432574e7-df30-4103-a396-c758c4df932c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.531310 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961fc017-9eb5-427a-8d58-a189c19eadc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.536977 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/432574e7-df30-4103-a396-c758c4df932c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.537594 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/432574e7-df30-4103-a396-c758c4df932c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.538959 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432574e7-df30-4103-a396-c758c4df932c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.581159 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97s8s\" (UniqueName: \"kubernetes.io/projected/432574e7-df30-4103-a396-c758c4df932c-kube-api-access-97s8s\") pod \"kube-state-metrics-0\" (UID: \"432574e7-df30-4103-a396-c758c4df932c\") " pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.646532 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.807187 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5p44f" event={"ID":"961fc017-9eb5-427a-8d58-a189c19eadc5","Type":"ContainerDied","Data":"6ad27c1ea204e77fd72748173c41350f5af7398095110c35d0f6fcc4129cc722"} Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.807499 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad27c1ea204e77fd72748173c41350f5af7398095110c35d0f6fcc4129cc722" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.807589 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5p44f" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.897137 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.898945 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.901615 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.902290 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pkghb" Dec 01 08:59:33 crc kubenswrapper[4689]: I1201 08:59:33.922108 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.058484 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cnl\" (UniqueName: \"kubernetes.io/projected/d111a251-6be1-4996-a20d-a6ecdb0dbec9-kube-api-access-72cnl\") pod \"nova-cell0-conductor-0\" (UID: \"d111a251-6be1-4996-a20d-a6ecdb0dbec9\") " pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.058544 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d111a251-6be1-4996-a20d-a6ecdb0dbec9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d111a251-6be1-4996-a20d-a6ecdb0dbec9\") " pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.058635 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d111a251-6be1-4996-a20d-a6ecdb0dbec9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d111a251-6be1-4996-a20d-a6ecdb0dbec9\") " pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.160226 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72cnl\" (UniqueName: \"kubernetes.io/projected/d111a251-6be1-4996-a20d-a6ecdb0dbec9-kube-api-access-72cnl\") pod \"nova-cell0-conductor-0\" (UID: \"d111a251-6be1-4996-a20d-a6ecdb0dbec9\") " pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.160317 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d111a251-6be1-4996-a20d-a6ecdb0dbec9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"d111a251-6be1-4996-a20d-a6ecdb0dbec9\") " pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.160500 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d111a251-6be1-4996-a20d-a6ecdb0dbec9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d111a251-6be1-4996-a20d-a6ecdb0dbec9\") " pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.166952 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d111a251-6be1-4996-a20d-a6ecdb0dbec9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d111a251-6be1-4996-a20d-a6ecdb0dbec9\") " pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.184261 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d111a251-6be1-4996-a20d-a6ecdb0dbec9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d111a251-6be1-4996-a20d-a6ecdb0dbec9\") " pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.187818 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72cnl\" (UniqueName: \"kubernetes.io/projected/d111a251-6be1-4996-a20d-a6ecdb0dbec9-kube-api-access-72cnl\") pod \"nova-cell0-conductor-0\" (UID: \"d111a251-6be1-4996-a20d-a6ecdb0dbec9\") " pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.192715 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.224472 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.750052 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 01 08:59:34 crc kubenswrapper[4689]: W1201 08:59:34.759263 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd111a251_6be1_4996_a20d_a6ecdb0dbec9.slice/crio-f10c8b7e5fac091641fc3c24f0720fcf29ab313387fe6c54b8a406ba19a0bf2a WatchSource:0}: Error finding container f10c8b7e5fac091641fc3c24f0720fcf29ab313387fe6c54b8a406ba19a0bf2a: Status 404 returned error can't find the container with id f10c8b7e5fac091641fc3c24f0720fcf29ab313387fe6c54b8a406ba19a0bf2a Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.804188 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.804505 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="sg-core" containerID="cri-o://4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57" gracePeriod=30 Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.804555 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="ceilometer-notification-agent" containerID="cri-o://66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803" gracePeriod=30 Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.804533 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="proxy-httpd" containerID="cri-o://1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828" gracePeriod=30 Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.804841 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="ceilometer-central-agent" containerID="cri-o://83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8" gracePeriod=30 Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.818977 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"432574e7-df30-4103-a396-c758c4df932c","Type":"ContainerStarted","Data":"fc6e702cb6b67bbf9e1db1517b394e86fb757ac3d1d7520dd453e1a6f597b3e8"} Dec 01 08:59:34 crc kubenswrapper[4689]: I1201 08:59:34.820143 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d111a251-6be1-4996-a20d-a6ecdb0dbec9","Type":"ContainerStarted","Data":"f10c8b7e5fac091641fc3c24f0720fcf29ab313387fe6c54b8a406ba19a0bf2a"} Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.082182 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06aa5768-7753-4a2d-8e40-96cea62d055c" path="/var/lib/kubelet/pods/06aa5768-7753-4a2d-8e40-96cea62d055c/volumes" Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.830310 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"432574e7-df30-4103-a396-c758c4df932c","Type":"ContainerStarted","Data":"1c208c6d7e118cbc858f8c9cceb237c4f18b73917b5d1fc309cdaffdab0f24b0"} Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.830437 4689 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.834861 4689 generic.go:334] "Generic (PLEG): container finished" podID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerID="1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828" exitCode=0 Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.834889 4689 generic.go:334] "Generic (PLEG): container finished" podID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerID="4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57" exitCode=2 Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.834899 4689 generic.go:334] "Generic (PLEG): container finished" podID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerID="83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8" exitCode=0 Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.834915 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerDied","Data":"1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828"} Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.834973 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerDied","Data":"4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57"} Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.834984 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerDied","Data":"83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8"} Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.837231 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d111a251-6be1-4996-a20d-a6ecdb0dbec9","Type":"ContainerStarted","Data":"b383be4ebc9a4f61298c4032bfb276f6e9589aa230889f5549cf9bde3fcd07eb"} Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.837461 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.925392 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.860768076 podStartE2EDuration="2.925359518s" podCreationTimestamp="2025-12-01 08:59:33 +0000 UTC" firstStartedPulling="2025-12-01 08:59:34.192875808 +0000 UTC m=+1254.265163712" lastFinishedPulling="2025-12-01 08:59:35.25746726 +0000 UTC m=+1255.329755154" observedRunningTime="2025-12-01 08:59:35.857090074 +0000 UTC m=+1255.929377988" watchObservedRunningTime="2025-12-01 08:59:35.925359518 +0000 UTC m=+1255.997647422" Dec 01 08:59:35 crc kubenswrapper[4689]: I1201 08:59:35.927576 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.927567417 podStartE2EDuration="2.927567417s" podCreationTimestamp="2025-12-01 08:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:59:35.923276581 +0000 UTC m=+1255.995564485" watchObservedRunningTime="2025-12-01 08:59:35.927567417 +0000 UTC m=+1255.999855321" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.446520 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.625635 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-log-httpd\") pod \"ca91aa7d-c591-4a04-81f6-738d5939ffed\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.625965 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-sg-core-conf-yaml\") pod \"ca91aa7d-c591-4a04-81f6-738d5939ffed\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.626080 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-combined-ca-bundle\") pod \"ca91aa7d-c591-4a04-81f6-738d5939ffed\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.626198 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-run-httpd\") pod \"ca91aa7d-c591-4a04-81f6-738d5939ffed\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.626346 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hjzq\" (UniqueName: \"kubernetes.io/projected/ca91aa7d-c591-4a04-81f6-738d5939ffed-kube-api-access-9hjzq\") pod \"ca91aa7d-c591-4a04-81f6-738d5939ffed\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.626461 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-config-data\") pod \"ca91aa7d-c591-4a04-81f6-738d5939ffed\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.626568 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-scripts\") pod \"ca91aa7d-c591-4a04-81f6-738d5939ffed\" (UID: \"ca91aa7d-c591-4a04-81f6-738d5939ffed\") " Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.629829 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ca91aa7d-c591-4a04-81f6-738d5939ffed" (UID: "ca91aa7d-c591-4a04-81f6-738d5939ffed"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.632008 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ca91aa7d-c591-4a04-81f6-738d5939ffed" (UID: "ca91aa7d-c591-4a04-81f6-738d5939ffed"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.644212 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-scripts" (OuterVolumeSpecName: "scripts") pod "ca91aa7d-c591-4a04-81f6-738d5939ffed" (UID: "ca91aa7d-c591-4a04-81f6-738d5939ffed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.644762 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca91aa7d-c591-4a04-81f6-738d5939ffed-kube-api-access-9hjzq" (OuterVolumeSpecName: "kube-api-access-9hjzq") pod "ca91aa7d-c591-4a04-81f6-738d5939ffed" (UID: "ca91aa7d-c591-4a04-81f6-738d5939ffed"). InnerVolumeSpecName "kube-api-access-9hjzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.728684 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.729265 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca91aa7d-c591-4a04-81f6-738d5939ffed-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.729348 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hjzq\" (UniqueName: \"kubernetes.io/projected/ca91aa7d-c591-4a04-81f6-738d5939ffed-kube-api-access-9hjzq\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.729453 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.744719 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ca91aa7d-c591-4a04-81f6-738d5939ffed" (UID: "ca91aa7d-c591-4a04-81f6-738d5939ffed"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.787094 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca91aa7d-c591-4a04-81f6-738d5939ffed" (UID: "ca91aa7d-c591-4a04-81f6-738d5939ffed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.789246 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-config-data" (OuterVolumeSpecName: "config-data") pod "ca91aa7d-c591-4a04-81f6-738d5939ffed" (UID: "ca91aa7d-c591-4a04-81f6-738d5939ffed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.831761 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.831805 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.831816 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca91aa7d-c591-4a04-81f6-738d5939ffed-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.857050 4689 generic.go:334] "Generic (PLEG): container finished" podID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerID="66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803" exitCode=0 Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.857120 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.857122 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerDied","Data":"66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803"} Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.857520 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca91aa7d-c591-4a04-81f6-738d5939ffed","Type":"ContainerDied","Data":"68f912aaad9845f50ba0e832c61fa54c9f753cfd95401f0a69ab4e143f3c8670"} Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.857538 4689 scope.go:117] "RemoveContainer" containerID="1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.880038 4689 scope.go:117] "RemoveContainer" containerID="4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.906590 4689 scope.go:117] "RemoveContainer" containerID="66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.912604 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.934138 4689 scope.go:117] "RemoveContainer" containerID="83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.937024 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.953202 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:59:36 crc kubenswrapper[4689]: E1201 08:59:36.953787 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="ceilometer-central-agent" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.953814 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="ceilometer-central-agent" Dec 01 08:59:36 crc kubenswrapper[4689]: E1201 08:59:36.953845 4689 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="sg-core" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.953854 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="sg-core" Dec 01 08:59:36 crc kubenswrapper[4689]: E1201 08:59:36.953867 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="proxy-httpd" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.953876 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="proxy-httpd" Dec 01 08:59:36 crc kubenswrapper[4689]: E1201 08:59:36.953913 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="ceilometer-notification-agent" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.953921 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="ceilometer-notification-agent" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.954165 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="ceilometer-central-agent" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.954191 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="proxy-httpd" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.954216 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="sg-core" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.954231 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" containerName="ceilometer-notification-agent" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.955267 4689 scope.go:117] "RemoveContainer" containerID="1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828" Dec 01 08:59:36 crc kubenswrapper[4689]: E1201 08:59:36.955691 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828\": container with ID starting with 1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828 not found: ID does not exist" containerID="1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.955735 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828"} err="failed to get container status \"1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828\": rpc error: code = NotFound desc = could not find container \"1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828\": container with ID starting with 1041d558509f86248d1771c1260f555c8f385252e765863851b10f657bfa7828 not found: ID does not exist" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.955761 4689 scope.go:117] "RemoveContainer" containerID="4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57" Dec 01 08:59:36 crc kubenswrapper[4689]: E1201 08:59:36.956006 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57\": container with ID starting with 4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57 not found: ID does not exist" containerID="4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.956037 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57"} err="failed to get container status \"4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57\": rpc error: code = NotFound desc = could not find container \"4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57\": container with ID starting with 4d4c1874741c71799aa2b671a22dad2656d16a28a0614913a3ebfac8d77bcb57 not found: ID does not exist" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.956056 4689 scope.go:117] "RemoveContainer" containerID="66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803" Dec 01 08:59:36 crc kubenswrapper[4689]: E1201 08:59:36.956247 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803\": container with ID starting with 66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803 not found: ID does not exist" containerID="66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.956284 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803"} err="failed to get container status \"66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803\": rpc error: code = NotFound desc = could not find container \"66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803\": container with ID starting with 66163a3cdaf21f3e56269495e3505461d7344eeb95c394f02ee78db4813a3803 not found: ID does not exist" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.956302 4689 scope.go:117] "RemoveContainer" containerID="83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8" Dec 01 08:59:36 crc kubenswrapper[4689]: E1201 08:59:36.956549 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8\": container with ID starting with 83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8 not found: ID does not exist" containerID="83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.956577 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8"} err="failed to get container status \"83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8\": rpc error: code = NotFound desc = could not find container \"83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8\": container with ID starting with 83d0ac9695779df6e7efd49f1e7e759d0c57d8bbacc41bba3230b2dacced2ab8 not found: ID does not exist" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.956787 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.959866 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.961756 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.961953 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 01 08:59:36 crc kubenswrapper[4689]: I1201 08:59:36.966208 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.060728 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca91aa7d-c591-4a04-81f6-738d5939ffed" path="/var/lib/kubelet/pods/ca91aa7d-c591-4a04-81f6-738d5939ffed/volumes" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.137035 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.137100 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-run-httpd\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.137133 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.137153 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56wc4\" (UniqueName: \"kubernetes.io/projected/b80ca10a-fd51-4c1c-b6ed-ca470af97459-kube-api-access-56wc4\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.138399 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-log-httpd\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.138448 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-scripts\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.138492 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " 
pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.138616 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-config-data\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.239984 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-config-data\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.240070 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.240109 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-run-httpd\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.240133 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.240154 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56wc4\" (UniqueName: \"kubernetes.io/projected/b80ca10a-fd51-4c1c-b6ed-ca470af97459-kube-api-access-56wc4\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.240178 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-log-httpd\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.240201 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-scripts\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.240238 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.241332 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-log-httpd\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " 
pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.241459 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-run-httpd\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.254686 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-scripts\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.255335 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.255932 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56wc4\" (UniqueName: \"kubernetes.io/projected/b80ca10a-fd51-4c1c-b6ed-ca470af97459-kube-api-access-56wc4\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.256728 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.257387 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.258636 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-config-data\") pod \"ceilometer-0\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") " pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.287579 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.656435 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 08:59:37 crc kubenswrapper[4689]: I1201 08:59:37.868267 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerStarted","Data":"1f9ecc37647420096cf8d05c17e3f0511511c21b496c1746e7448c22fc7f24e9"} Dec 01 08:59:38 crc kubenswrapper[4689]: I1201 08:59:38.882060 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerStarted","Data":"ae05c105819f7b8b5d0855d5d06b27ba8a91eb34cde922ba4feca5fdf9f4e5c9"} Dec 01 08:59:39 crc kubenswrapper[4689]: I1201 08:59:39.146736 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 08:59:39 crc kubenswrapper[4689]: I1201 08:59:39.147074 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 08:59:39 crc kubenswrapper[4689]: I1201 08:59:39.895719 4689 generic.go:334] "Generic (PLEG): container finished" podID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerID="ad38fd3db04934de38e1df2739d0df091d88ad6e59e5e1dc95f4167e1c88b624" exitCode=137 Dec 01 08:59:39 crc kubenswrapper[4689]: I1201 08:59:39.896143 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9cd9dbd-qxwq7" event={"ID":"e88c04bb-01ff-47a6-8942-05a9a2a68416","Type":"ContainerDied","Data":"ad38fd3db04934de38e1df2739d0df091d88ad6e59e5e1dc95f4167e1c88b624"} Dec 01 08:59:39 crc kubenswrapper[4689]: I1201 08:59:39.900259 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerStarted","Data":"be8cc33258dfc0a1bf1d4f5c12eca9a0ee0f07e8ad7da84c3ca1bec9bbb28daa"} Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.002572 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.102287 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-scripts\") pod \"e88c04bb-01ff-47a6-8942-05a9a2a68416\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.102739 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-combined-ca-bundle\") pod \"e88c04bb-01ff-47a6-8942-05a9a2a68416\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.102793 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e88c04bb-01ff-47a6-8942-05a9a2a68416-logs\") pod \"e88c04bb-01ff-47a6-8942-05a9a2a68416\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.102915 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-tls-certs\") pod \"e88c04bb-01ff-47a6-8942-05a9a2a68416\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.102949 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-secret-key\") pod \"e88c04bb-01ff-47a6-8942-05a9a2a68416\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.103041 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-config-data\") pod \"e88c04bb-01ff-47a6-8942-05a9a2a68416\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.103092 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx6kr\" (UniqueName: \"kubernetes.io/projected/e88c04bb-01ff-47a6-8942-05a9a2a68416-kube-api-access-rx6kr\") pod \"e88c04bb-01ff-47a6-8942-05a9a2a68416\" (UID: \"e88c04bb-01ff-47a6-8942-05a9a2a68416\") " Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.103242 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e88c04bb-01ff-47a6-8942-05a9a2a68416-logs" (OuterVolumeSpecName: "logs") pod "e88c04bb-01ff-47a6-8942-05a9a2a68416" (UID: "e88c04bb-01ff-47a6-8942-05a9a2a68416"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.103891 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e88c04bb-01ff-47a6-8942-05a9a2a68416-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.113723 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e88c04bb-01ff-47a6-8942-05a9a2a68416-kube-api-access-rx6kr" (OuterVolumeSpecName: "kube-api-access-rx6kr") pod "e88c04bb-01ff-47a6-8942-05a9a2a68416" (UID: "e88c04bb-01ff-47a6-8942-05a9a2a68416"). 
InnerVolumeSpecName "kube-api-access-rx6kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.114242 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e88c04bb-01ff-47a6-8942-05a9a2a68416" (UID: "e88c04bb-01ff-47a6-8942-05a9a2a68416"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.139041 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-scripts" (OuterVolumeSpecName: "scripts") pod "e88c04bb-01ff-47a6-8942-05a9a2a68416" (UID: "e88c04bb-01ff-47a6-8942-05a9a2a68416"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.165271 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-config-data" (OuterVolumeSpecName: "config-data") pod "e88c04bb-01ff-47a6-8942-05a9a2a68416" (UID: "e88c04bb-01ff-47a6-8942-05a9a2a68416"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.170045 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e88c04bb-01ff-47a6-8942-05a9a2a68416" (UID: "e88c04bb-01ff-47a6-8942-05a9a2a68416"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.203901 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e88c04bb-01ff-47a6-8942-05a9a2a68416" (UID: "e88c04bb-01ff-47a6-8942-05a9a2a68416"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.205629 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.205711 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx6kr\" (UniqueName: \"kubernetes.io/projected/e88c04bb-01ff-47a6-8942-05a9a2a68416-kube-api-access-rx6kr\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.205820 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88c04bb-01ff-47a6-8942-05a9a2a68416-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.205876 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.205946 4689 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.206005 4689 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e88c04bb-01ff-47a6-8942-05a9a2a68416-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.943288 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9cd9dbd-qxwq7" event={"ID":"e88c04bb-01ff-47a6-8942-05a9a2a68416","Type":"ContainerDied","Data":"775cf8bfba0575c3f0bd28d24b74d52353c3436f003d538b93c085ffb30a1471"} Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.943587 4689 scope.go:117] "RemoveContainer" containerID="dbacf385cc7e024476440ee9e90e68d5f7a572a69d91e9e613651d878c816d6c" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.943323 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d9cd9dbd-qxwq7" Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.954633 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerStarted","Data":"2247cac836a1a5d7af8f2e4b0043ccf541b27ba3ff89f4921e4f1e6a5d6c4f63"} Dec 01 08:59:40 crc kubenswrapper[4689]: I1201 08:59:40.993798 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d9cd9dbd-qxwq7"] Dec 01 08:59:41 crc kubenswrapper[4689]: I1201 08:59:41.014979 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78d9cd9dbd-qxwq7"] Dec 01 08:59:41 crc kubenswrapper[4689]: I1201 08:59:41.063775 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" path="/var/lib/kubelet/pods/e88c04bb-01ff-47a6-8942-05a9a2a68416/volumes" Dec 01 08:59:41 crc kubenswrapper[4689]: I1201 08:59:41.180057 4689 scope.go:117] "RemoveContainer" containerID="ad38fd3db04934de38e1df2739d0df091d88ad6e59e5e1dc95f4167e1c88b624" Dec 01 08:59:42 crc kubenswrapper[4689]: I1201 08:59:42.976852 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerStarted","Data":"cd17cffe682a76fad9d61d955d69ed2977145dda1dc83815feb448dda30ac918"} Dec 01 08:59:42 crc kubenswrapper[4689]: I1201 08:59:42.977519 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 08:59:43 crc kubenswrapper[4689]: I1201 08:59:43.007748 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.702404638 podStartE2EDuration="7.007716454s" podCreationTimestamp="2025-12-01 08:59:36 +0000 UTC" firstStartedPulling="2025-12-01 08:59:37.662084612 +0000 UTC m=+1257.734372516" lastFinishedPulling="2025-12-01 08:59:41.967396428 +0000 UTC m=+1262.039684332" observedRunningTime="2025-12-01 08:59:43.000066638 +0000 UTC m=+1263.072354552" watchObservedRunningTime="2025-12-01 08:59:43.007716454 +0000 UTC m=+1263.080004388" Dec 01 08:59:43 crc kubenswrapper[4689]: I1201 08:59:43.657371 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.271397 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.807800 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gf7l5"] Dec 01 08:59:44 crc kubenswrapper[4689]: E1201 08:59:44.808724 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.808796 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" Dec 01 08:59:44 crc kubenswrapper[4689]: E1201 08:59:44.808866 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon-log" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.808919 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon-log" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.809163 4689 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.809225 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon-log" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.809288 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.809943 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.821052 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.831534 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.836468 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prmsv\" (UniqueName: \"kubernetes.io/projected/08dd5230-a82f-43bc-9517-78b80ed7b39a-kube-api-access-prmsv\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.836541 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-scripts\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.836579 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-config-data\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.836601 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.937671 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gf7l5"] Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.938616 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-scripts\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.938680 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-config-data\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: 
\"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.938703 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.938788 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prmsv\" (UniqueName: \"kubernetes.io/projected/08dd5230-a82f-43bc-9517-78b80ed7b39a-kube-api-access-prmsv\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.947910 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-config-data\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.956074 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-scripts\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.956663 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:44 crc kubenswrapper[4689]: I1201 08:59:44.976476 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prmsv\" (UniqueName: \"kubernetes.io/projected/08dd5230-a82f-43bc-9517-78b80ed7b39a-kube-api-access-prmsv\") pod \"nova-cell0-cell-mapping-gf7l5\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.169797 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.265915 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 08:59:45 crc kubenswrapper[4689]: E1201 08:59:45.287321 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.287367 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88c04bb-01ff-47a6-8942-05a9a2a68416" containerName="horizon" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.288557 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.300471 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.322820 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.376685 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.376730 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-config-data\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.376827 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bef3fc-9bf2-4daf-a366-29c8129db360-logs\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.376872 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqcjp\" (UniqueName: \"kubernetes.io/projected/c4bef3fc-9bf2-4daf-a366-29c8129db360-kube-api-access-kqcjp\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.475532 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.476986 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.483174 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.483230 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-config-data\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.483564 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bef3fc-9bf2-4daf-a366-29c8129db360-logs\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.483656 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqcjp\" (UniqueName: \"kubernetes.io/projected/c4bef3fc-9bf2-4daf-a366-29c8129db360-kube-api-access-kqcjp\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.495542 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bef3fc-9bf2-4daf-a366-29c8129db360-logs\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.523178 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-config-data\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.523707 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.537232 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.554941 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.579240 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqcjp\" (UniqueName: \"kubernetes.io/projected/c4bef3fc-9bf2-4daf-a366-29c8129db360-kube-api-access-kqcjp\") pod \"nova-api-0\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.597217 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.597376 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wnd\" (UniqueName: \"kubernetes.io/projected/fd6b584a-d753-4c05-a893-b160f9109965-kube-api-access-h5wnd\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.597441 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.663656 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.743832 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5wnd\" (UniqueName: \"kubernetes.io/projected/fd6b584a-d753-4c05-a893-b160f9109965-kube-api-access-h5wnd\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.744121 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.744168 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.762378 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.764711 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.766462 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.770902 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.800164 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5wnd\" (UniqueName: \"kubernetes.io/projected/fd6b584a-d753-4c05-a893-b160f9109965-kube-api-access-h5wnd\") pod \"nova-cell1-novncproxy-0\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") " pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.814466 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.861144 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.866798 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfddk\" (UniqueName: \"kubernetes.io/projected/d6040d5d-158a-4d64-89b6-3f17ad666c40-kube-api-access-cfddk\") pod \"nova-scheduler-0\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " pod="openstack/nova-scheduler-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.869859 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-config-data\") pod \"nova-scheduler-0\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " pod="openstack/nova-scheduler-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.870192 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " pod="openstack/nova-scheduler-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.888583 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.896000 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.901437 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.914194 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.924476 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.992429 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p547\" (UniqueName: \"kubernetes.io/projected/f8db9c55-de1b-42b3-bfda-29d16626b13b-kube-api-access-5p547\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.992482 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfddk\" (UniqueName: \"kubernetes.io/projected/d6040d5d-158a-4d64-89b6-3f17ad666c40-kube-api-access-cfddk\") pod \"nova-scheduler-0\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " pod="openstack/nova-scheduler-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.992510 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-config-data\") pod \"nova-scheduler-0\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " pod="openstack/nova-scheduler-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.992533 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " pod="openstack/nova-scheduler-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.992554 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-config-data\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.992571 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8db9c55-de1b-42b3-bfda-29d16626b13b-logs\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:45 crc kubenswrapper[4689]: I1201 08:59:45.992596 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.012194 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-config-data\") pod \"nova-scheduler-0\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " pod="openstack/nova-scheduler-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.021867 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " pod="openstack/nova-scheduler-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.053954 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfddk\" (UniqueName: \"kubernetes.io/projected/d6040d5d-158a-4d64-89b6-3f17ad666c40-kube-api-access-cfddk\") pod \"nova-scheduler-0\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " pod="openstack/nova-scheduler-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.096249 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p547\" (UniqueName: \"kubernetes.io/projected/f8db9c55-de1b-42b3-bfda-29d16626b13b-kube-api-access-5p547\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.096316 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-config-data\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.096337 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8db9c55-de1b-42b3-bfda-29d16626b13b-logs\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.096363 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.097971 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8db9c55-de1b-42b3-bfda-29d16626b13b-logs\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.106299 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-config-data\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.107129 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-59d8q"] Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.108839 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.114098 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.155561 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-59d8q"] Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.164721 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.183397 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p547\" (UniqueName: \"kubernetes.io/projected/f8db9c55-de1b-42b3-bfda-29d16626b13b-kube-api-access-5p547\") pod \"nova-metadata-0\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.202204 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kltzq\" (UniqueName: \"kubernetes.io/projected/168c36e5-41be-45fa-8a86-334ccc148504-kube-api-access-kltzq\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.202303 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-config\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.202335 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.202423 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.202456 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.202478 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.305920 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: 
\"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.305973 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.306000 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.306061 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kltzq\" (UniqueName: \"kubernetes.io/projected/168c36e5-41be-45fa-8a86-334ccc148504-kube-api-access-kltzq\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.306107 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-config\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.306135 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.308850 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.309476 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.309736 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-config\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.310196 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc 
kubenswrapper[4689]: I1201 08:59:46.312023 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.341921 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kltzq\" (UniqueName: \"kubernetes.io/projected/168c36e5-41be-45fa-8a86-334ccc148504-kube-api-access-kltzq\") pod \"dnsmasq-dns-845d6d6f59-59d8q\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.392637 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gf7l5"] Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.452317 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.508317 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.695987 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 08:59:46 crc kubenswrapper[4689]: I1201 08:59:46.835672 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.041080 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.083446 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4bef3fc-9bf2-4daf-a366-29c8129db360","Type":"ContainerStarted","Data":"2a8d72d3b633207c7345eaca11f8761d6ca5fdf6287256def1b35eb9bf006e4a"} Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.099604 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6040d5d-158a-4d64-89b6-3f17ad666c40","Type":"ContainerStarted","Data":"3aa81cd41953c10eef702ee8d320e32c8bd43d423e4e2ca4c400dcddc954b74c"} Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.107094 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gf7l5" event={"ID":"08dd5230-a82f-43bc-9517-78b80ed7b39a","Type":"ContainerStarted","Data":"89f7562fa8966edf05d3e6681ce4143d625c7241d261a3c58263f01a0f70d79e"} Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.107138 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gf7l5" event={"ID":"08dd5230-a82f-43bc-9517-78b80ed7b39a","Type":"ContainerStarted","Data":"bcc94656b6d40a2e57c236cef3edd1f080dfe529a44808cdb96c03627414fb45"} Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.117181 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fd6b584a-d753-4c05-a893-b160f9109965","Type":"ContainerStarted","Data":"3b3f8552718664b5bc3ea1db52c05874c3c9b8e30a7b608cb9e78fa769814532"} Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.143428 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-gf7l5" podStartSLOduration=3.143406159 podStartE2EDuration="3.143406159s" 
podCreationTimestamp="2025-12-01 08:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:59:47.120896061 +0000 UTC m=+1267.193183965" watchObservedRunningTime="2025-12-01 08:59:47.143406159 +0000 UTC m=+1267.215694063" Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.351709 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.437805 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-59d8q"] Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.583791 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qds9f"] Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.586370 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.591285 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.596061 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.602969 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qds9f"] Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.757908 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-scripts\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.757956 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-config-data\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.758000 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:47 crc kubenswrapper[4689]: I1201 08:59:47.758055 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kjkv\" (UniqueName: \"kubernetes.io/projected/63fc0d00-4168-47eb-998a-d32962b46bad-kube-api-access-5kjkv\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.073659 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-scripts\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " 
pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.073740 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-config-data\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.074005 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.074077 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kjkv\" (UniqueName: \"kubernetes.io/projected/63fc0d00-4168-47eb-998a-d32962b46bad-kube-api-access-5kjkv\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.088304 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-scripts\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.089621 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-config-data\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.125947 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.171135 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kjkv\" (UniqueName: \"kubernetes.io/projected/63fc0d00-4168-47eb-998a-d32962b46bad-kube-api-access-5kjkv\") pod \"nova-cell1-conductor-db-sync-qds9f\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.287492 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8db9c55-de1b-42b3-bfda-29d16626b13b","Type":"ContainerStarted","Data":"91ecbd866bc02517847542605cfae2801c8ede4dc587ab0caa9a44b6d54991dd"} Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.290985 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" event={"ID":"168c36e5-41be-45fa-8a86-334ccc148504","Type":"ContainerStarted","Data":"61b5a6302b698504a6bb0098be6e2120a5c4d5da5e21e823f01e4d587642a97f"} Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.291047 4689 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" event={"ID":"168c36e5-41be-45fa-8a86-334ccc148504","Type":"ContainerStarted","Data":"6891b878febc452425d70233668c965ae7ab6a7e219c6a47f374ea353338d135"} Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.344263 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 08:59:48 crc kubenswrapper[4689]: I1201 08:59:48.773831 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qds9f"] Dec 01 08:59:49 crc kubenswrapper[4689]: I1201 08:59:49.303657 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qds9f" event={"ID":"63fc0d00-4168-47eb-998a-d32962b46bad","Type":"ContainerStarted","Data":"61cc432ae69ff9c2c03725e948d044f0bdef4f09f870962a779ebfd9f635da7f"} Dec 01 08:59:49 crc kubenswrapper[4689]: I1201 08:59:49.303927 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qds9f" event={"ID":"63fc0d00-4168-47eb-998a-d32962b46bad","Type":"ContainerStarted","Data":"db46b3d3e74f753c8c5e8b7d1dcabf732363bc050aa77b5e431a62f5b6164c09"} Dec 01 08:59:49 crc kubenswrapper[4689]: I1201 08:59:49.312524 4689 generic.go:334] "Generic (PLEG): container finished" podID="168c36e5-41be-45fa-8a86-334ccc148504" containerID="61b5a6302b698504a6bb0098be6e2120a5c4d5da5e21e823f01e4d587642a97f" exitCode=0 Dec 01 08:59:49 crc kubenswrapper[4689]: I1201 08:59:49.312581 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" event={"ID":"168c36e5-41be-45fa-8a86-334ccc148504","Type":"ContainerDied","Data":"61b5a6302b698504a6bb0098be6e2120a5c4d5da5e21e823f01e4d587642a97f"} Dec 01 08:59:49 crc kubenswrapper[4689]: I1201 08:59:49.312604 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" event={"ID":"168c36e5-41be-45fa-8a86-334ccc148504","Type":"ContainerStarted","Data":"57490bb6e9268b7bada778cd46cdb78d2be222b035283746ea691170a54c8ddb"} Dec 01 08:59:49 crc kubenswrapper[4689]: I1201 08:59:49.313482 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:49 crc kubenswrapper[4689]: I1201 08:59:49.327472 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-qds9f" podStartSLOduration=2.327455916 podStartE2EDuration="2.327455916s" podCreationTimestamp="2025-12-01 08:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:59:49.322536852 +0000 UTC m=+1269.394824746" watchObservedRunningTime="2025-12-01 08:59:49.327455916 +0000 UTC m=+1269.399743820" Dec 01 08:59:49 crc kubenswrapper[4689]: I1201 08:59:49.352932 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" podStartSLOduration=4.352908403 podStartE2EDuration="4.352908403s" podCreationTimestamp="2025-12-01 08:59:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:59:49.342310146 +0000 UTC m=+1269.414598050" watchObservedRunningTime="2025-12-01 08:59:49.352908403 +0000 UTC m=+1269.425196307" Dec 01 08:59:50 crc kubenswrapper[4689]: I1201 08:59:50.025186 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Dec 01 08:59:50 crc kubenswrapper[4689]: I1201 08:59:50.065988 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.393683 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4bef3fc-9bf2-4daf-a366-29c8129db360","Type":"ContainerStarted","Data":"d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d"} Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.394275 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4bef3fc-9bf2-4daf-a366-29c8129db360","Type":"ContainerStarted","Data":"32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5"} Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.397043 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8db9c55-de1b-42b3-bfda-29d16626b13b","Type":"ContainerStarted","Data":"0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8"} Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.397104 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8db9c55-de1b-42b3-bfda-29d16626b13b","Type":"ContainerStarted","Data":"f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe"} Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.397111 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerName="nova-metadata-log" containerID="cri-o://f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe" gracePeriod=30 Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.397166 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerName="nova-metadata-metadata" containerID="cri-o://0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8" gracePeriod=30 Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.399302 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6040d5d-158a-4d64-89b6-3f17ad666c40","Type":"ContainerStarted","Data":"c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2"} Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.407644 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fd6b584a-d753-4c05-a893-b160f9109965","Type":"ContainerStarted","Data":"40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f"} Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.407866 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fd6b584a-d753-4c05-a893-b160f9109965" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f" gracePeriod=30 Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.413788 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.609501818 podStartE2EDuration="8.413777708s" podCreationTimestamp="2025-12-01 08:59:45 +0000 UTC" firstStartedPulling="2025-12-01 08:59:46.726538801 +0000 UTC m=+1266.798826705" lastFinishedPulling="2025-12-01 08:59:52.530814691 +0000 UTC m=+1272.603102595" 
observedRunningTime="2025-12-01 08:59:53.4112872 +0000 UTC m=+1273.483575104" watchObservedRunningTime="2025-12-01 08:59:53.413777708 +0000 UTC m=+1273.486065612" Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.430695 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.970177249 podStartE2EDuration="8.430677184s" podCreationTimestamp="2025-12-01 08:59:45 +0000 UTC" firstStartedPulling="2025-12-01 08:59:47.062194927 +0000 UTC m=+1267.134482831" lastFinishedPulling="2025-12-01 08:59:52.522694862 +0000 UTC m=+1272.594982766" observedRunningTime="2025-12-01 08:59:53.425507794 +0000 UTC m=+1273.497795708" watchObservedRunningTime="2025-12-01 08:59:53.430677184 +0000 UTC m=+1273.502965088" Dec 01 08:59:53 crc kubenswrapper[4689]: I1201 08:59:53.451345 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.810009682 podStartE2EDuration="8.451326471s" podCreationTimestamp="2025-12-01 08:59:45 +0000 UTC" firstStartedPulling="2025-12-01 08:59:46.859398249 +0000 UTC m=+1266.931686153" lastFinishedPulling="2025-12-01 08:59:52.500715038 +0000 UTC m=+1272.573002942" observedRunningTime="2025-12-01 08:59:53.444894848 +0000 UTC m=+1273.517182752" watchObservedRunningTime="2025-12-01 08:59:53.451326471 +0000 UTC m=+1273.523614375" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.350641 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.422107 4689 generic.go:334] "Generic (PLEG): container finished" podID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerID="0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8" exitCode=0 Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.422135 4689 generic.go:334] "Generic (PLEG): container finished" podID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerID="f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe" exitCode=143 Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.422178 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8db9c55-de1b-42b3-bfda-29d16626b13b","Type":"ContainerDied","Data":"0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8"} Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.422233 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8db9c55-de1b-42b3-bfda-29d16626b13b","Type":"ContainerDied","Data":"f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe"} Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.422243 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8db9c55-de1b-42b3-bfda-29d16626b13b","Type":"ContainerDied","Data":"91ecbd866bc02517847542605cfae2801c8ede4dc587ab0caa9a44b6d54991dd"} Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.422260 4689 scope.go:117] "RemoveContainer" containerID="0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.422254 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.454802 4689 scope.go:117] "RemoveContainer" containerID="f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.475657 4689 scope.go:117] "RemoveContainer" containerID="0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8" Dec 01 08:59:54 crc kubenswrapper[4689]: E1201 08:59:54.476258 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8\": container with ID starting with 0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8 not found: ID does not exist" containerID="0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.476296 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8"} err="failed to get container status \"0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8\": rpc error: code = NotFound desc = could not find container \"0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8\": container with ID starting with 0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8 not found: ID does not exist" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.476321 4689 scope.go:117] "RemoveContainer" containerID="f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe" Dec 01 08:59:54 crc kubenswrapper[4689]: E1201 08:59:54.478548 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe\": container with ID starting with f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe not found: ID does not exist" containerID="f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.478619 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe"} err="failed to get container status \"f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe\": rpc error: code = NotFound desc = could not find container \"f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe\": container with ID starting with f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe not found: ID does not exist" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.478660 4689 scope.go:117] "RemoveContainer" containerID="0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.480740 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8"} err="failed to get container status \"0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8\": rpc error: code = NotFound desc = could not find container \"0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8\": container with ID starting with 0818314c133cdbeed4a17ad523cf5e856115715332a6fe688636fd00b4dd08a8 not found: ID does not exist" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.480792 4689 
scope.go:117] "RemoveContainer" containerID="f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.481414 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe"} err="failed to get container status \"f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe\": rpc error: code = NotFound desc = could not find container \"f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe\": container with ID starting with f46fad7f4b73f8ccbc74beff9e24c30e1ecb6244810d57b22371f505c19ac4fe not found: ID does not exist" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.527352 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-config-data\") pod \"f8db9c55-de1b-42b3-bfda-29d16626b13b\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.527460 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-combined-ca-bundle\") pod \"f8db9c55-de1b-42b3-bfda-29d16626b13b\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.527505 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8db9c55-de1b-42b3-bfda-29d16626b13b-logs\") pod \"f8db9c55-de1b-42b3-bfda-29d16626b13b\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.527566 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p547\" (UniqueName: \"kubernetes.io/projected/f8db9c55-de1b-42b3-bfda-29d16626b13b-kube-api-access-5p547\") pod \"f8db9c55-de1b-42b3-bfda-29d16626b13b\" (UID: \"f8db9c55-de1b-42b3-bfda-29d16626b13b\") " Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.528254 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8db9c55-de1b-42b3-bfda-29d16626b13b-logs" (OuterVolumeSpecName: "logs") pod "f8db9c55-de1b-42b3-bfda-29d16626b13b" (UID: "f8db9c55-de1b-42b3-bfda-29d16626b13b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.557216 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8db9c55-de1b-42b3-bfda-29d16626b13b-kube-api-access-5p547" (OuterVolumeSpecName: "kube-api-access-5p547") pod "f8db9c55-de1b-42b3-bfda-29d16626b13b" (UID: "f8db9c55-de1b-42b3-bfda-29d16626b13b"). InnerVolumeSpecName "kube-api-access-5p547". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.588511 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-config-data" (OuterVolumeSpecName: "config-data") pod "f8db9c55-de1b-42b3-bfda-29d16626b13b" (UID: "f8db9c55-de1b-42b3-bfda-29d16626b13b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.613484 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8db9c55-de1b-42b3-bfda-29d16626b13b" (UID: "f8db9c55-de1b-42b3-bfda-29d16626b13b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.629860 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.629895 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8db9c55-de1b-42b3-bfda-29d16626b13b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.629905 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8db9c55-de1b-42b3-bfda-29d16626b13b-logs\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.629915 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p547\" (UniqueName: \"kubernetes.io/projected/f8db9c55-de1b-42b3-bfda-29d16626b13b-kube-api-access-5p547\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.807900 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.824958 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.837149 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 08:59:54 crc kubenswrapper[4689]: E1201 08:59:54.837984 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerName="nova-metadata-log" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.838089 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerName="nova-metadata-log" Dec 01 08:59:54 crc kubenswrapper[4689]: E1201 08:59:54.838207 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerName="nova-metadata-metadata" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.838282 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerName="nova-metadata-metadata" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.838606 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerName="nova-metadata-log" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.838711 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8db9c55-de1b-42b3-bfda-29d16626b13b" containerName="nova-metadata-metadata" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.840287 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.844015 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.846494 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.853317 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.940285 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.940644 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjkd\" (UniqueName: \"kubernetes.io/projected/977cbf76-96dd-4627-acc3-8f1eaf8ca809-kube-api-access-chjkd\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.940851 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.940980 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-config-data\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:54 crc kubenswrapper[4689]: I1201 08:59:54.941073 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977cbf76-96dd-4627-acc3-8f1eaf8ca809-logs\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.048177 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.048520 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-config-data\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.048591 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977cbf76-96dd-4627-acc3-8f1eaf8ca809-logs\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc 
kubenswrapper[4689]: I1201 08:59:55.048737 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.048761 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjkd\" (UniqueName: \"kubernetes.io/projected/977cbf76-96dd-4627-acc3-8f1eaf8ca809-kube-api-access-chjkd\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.051009 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977cbf76-96dd-4627-acc3-8f1eaf8ca809-logs\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.063747 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.064695 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-config-data\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.067967 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.096735 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjkd\" (UniqueName: \"kubernetes.io/projected/977cbf76-96dd-4627-acc3-8f1eaf8ca809-kube-api-access-chjkd\") pod \"nova-metadata-0\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.107715 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8db9c55-de1b-42b3-bfda-29d16626b13b" path="/var/lib/kubelet/pods/f8db9c55-de1b-42b3-bfda-29d16626b13b/volumes" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.187686 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.665078 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.665427 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.711764 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 08:59:55 crc kubenswrapper[4689]: I1201 08:59:55.889335 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.175986 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.176300 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.228659 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.442691 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"977cbf76-96dd-4627-acc3-8f1eaf8ca809","Type":"ContainerStarted","Data":"769e3f29ff95551a8170b45735cba8e0c11cde51be24e7c5b4bfe82cef5d8129"} Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.442739 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"977cbf76-96dd-4627-acc3-8f1eaf8ca809","Type":"ContainerStarted","Data":"87f9200281b2118b2b5ab85bcf7134d4cac4395099c853e0db14a32fdc07a69a"} Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.442752 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"977cbf76-96dd-4627-acc3-8f1eaf8ca809","Type":"ContainerStarted","Data":"5a95664b3c11c06c4cfb0792c0922fece8b08101d3956c7d847f51ccd6c1d373"} Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.464872 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.4648507 podStartE2EDuration="2.4648507s" podCreationTimestamp="2025-12-01 08:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 08:59:56.462292331 +0000 UTC m=+1276.534580235" watchObservedRunningTime="2025-12-01 08:59:56.4648507 +0000 UTC m=+1276.537138604" Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.486764 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.511524 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.622793 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mztfd"] Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.623021 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" podUID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" containerName="dnsmasq-dns" containerID="cri-o://5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67" gracePeriod=10 Dec 
01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.756514 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 08:59:56 crc kubenswrapper[4689]: I1201 08:59:56.756838 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.448752 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.455710 4689 generic.go:334] "Generic (PLEG): container finished" podID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" containerID="5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67" exitCode=0 Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.455793 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" event={"ID":"5493cbb2-5880-48d7-81fe-46ab0e2dcb68","Type":"ContainerDied","Data":"5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67"} Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.455867 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" event={"ID":"5493cbb2-5880-48d7-81fe-46ab0e2dcb68","Type":"ContainerDied","Data":"4e0885fdc6c97cca525b2aee357d1601ee27c8d2142e202a426b2b699dd883b9"} Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.455883 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.455891 4689 scope.go:117] "RemoveContainer" containerID="5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.501518 4689 scope.go:117] "RemoveContainer" containerID="187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.534414 4689 scope.go:117] "RemoveContainer" containerID="5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67" Dec 01 08:59:57 crc kubenswrapper[4689]: E1201 08:59:57.538277 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67\": container with ID starting with 5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67 not found: ID does not exist" containerID="5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.538345 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67"} err="failed to get container status \"5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67\": rpc error: code = NotFound desc = could not find container \"5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67\": container with ID starting with 5fa40b95ca6db1a7176c43590b21f5f5f7e88adb42e9791c29a5b8af6e002c67 not found: ID does not exist" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.538426 4689 scope.go:117] "RemoveContainer" containerID="187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74" Dec 01 08:59:57 crc kubenswrapper[4689]: E1201 08:59:57.538879 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74\": container with ID starting with 187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74 not found: ID does not exist" containerID="187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.538936 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74"} err="failed to get container status \"187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74\": rpc error: code = NotFound desc = could not find container \"187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74\": container with ID starting with 187461bff597345915f767183f516af00b25c766d7e4a8b0cee9338e17325e74 not found: ID does not exist" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.547937 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-svc\") pod \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.548064 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-config\") pod \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\" (UID: 
\"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.548226 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-kube-api-access-h8tks\") pod \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.548634 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-nb\") pod \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.548866 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-sb\") pod \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.549002 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-swift-storage-0\") pod \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\" (UID: \"5493cbb2-5880-48d7-81fe-46ab0e2dcb68\") " Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.572801 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-kube-api-access-h8tks" (OuterVolumeSpecName: "kube-api-access-h8tks") pod "5493cbb2-5880-48d7-81fe-46ab0e2dcb68" (UID: "5493cbb2-5880-48d7-81fe-46ab0e2dcb68"). InnerVolumeSpecName "kube-api-access-h8tks". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.650391 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5493cbb2-5880-48d7-81fe-46ab0e2dcb68" (UID: "5493cbb2-5880-48d7-81fe-46ab0e2dcb68"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.659096 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.659128 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8tks\" (UniqueName: \"kubernetes.io/projected/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-kube-api-access-h8tks\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.719093 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5493cbb2-5880-48d7-81fe-46ab0e2dcb68" (UID: "5493cbb2-5880-48d7-81fe-46ab0e2dcb68"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.739873 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-config" (OuterVolumeSpecName: "config") pod "5493cbb2-5880-48d7-81fe-46ab0e2dcb68" (UID: "5493cbb2-5880-48d7-81fe-46ab0e2dcb68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.756707 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5493cbb2-5880-48d7-81fe-46ab0e2dcb68" (UID: "5493cbb2-5880-48d7-81fe-46ab0e2dcb68"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.763578 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-config\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.763616 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.763627 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.763863 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5493cbb2-5880-48d7-81fe-46ab0e2dcb68" (UID: "5493cbb2-5880-48d7-81fe-46ab0e2dcb68"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 08:59:57 crc kubenswrapper[4689]: I1201 08:59:57.864866 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5493cbb2-5880-48d7-81fe-46ab0e2dcb68-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 08:59:58 crc kubenswrapper[4689]: I1201 08:59:58.090773 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mztfd"] Dec 01 08:59:58 crc kubenswrapper[4689]: I1201 08:59:58.104076 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-mztfd"] Dec 01 08:59:59 crc kubenswrapper[4689]: I1201 08:59:59.061082 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" path="/var/lib/kubelet/pods/5493cbb2-5880-48d7-81fe-46ab0e2dcb68/volumes" Dec 01 08:59:59 crc kubenswrapper[4689]: I1201 08:59:59.475687 4689 generic.go:334] "Generic (PLEG): container finished" podID="08dd5230-a82f-43bc-9517-78b80ed7b39a" containerID="89f7562fa8966edf05d3e6681ce4143d625c7241d261a3c58263f01a0f70d79e" exitCode=0 Dec 01 08:59:59 crc kubenswrapper[4689]: I1201 08:59:59.475768 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gf7l5" event={"ID":"08dd5230-a82f-43bc-9517-78b80ed7b39a","Type":"ContainerDied","Data":"89f7562fa8966edf05d3e6681ce4143d625c7241d261a3c58263f01a0f70d79e"} Dec 01 08:59:59 crc kubenswrapper[4689]: I1201 08:59:59.477739 4689 generic.go:334] "Generic (PLEG): container finished" podID="63fc0d00-4168-47eb-998a-d32962b46bad" containerID="61cc432ae69ff9c2c03725e948d044f0bdef4f09f870962a779ebfd9f635da7f" exitCode=0 Dec 01 08:59:59 crc kubenswrapper[4689]: I1201 08:59:59.477765 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qds9f" event={"ID":"63fc0d00-4168-47eb-998a-d32962b46bad","Type":"ContainerDied","Data":"61cc432ae69ff9c2c03725e948d044f0bdef4f09f870962a779ebfd9f635da7f"} Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.142905 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4"] Dec 01 09:00:00 crc kubenswrapper[4689]: E1201 09:00:00.143835 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" containerName="init" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.143856 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" containerName="init" Dec 01 09:00:00 crc kubenswrapper[4689]: E1201 09:00:00.143898 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" containerName="dnsmasq-dns" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.143908 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" containerName="dnsmasq-dns" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.144254 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" containerName="dnsmasq-dns" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.145808 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.149047 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.152720 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.160083 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4"] Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.189196 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.190213 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.223145 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b222f1da-8be5-48e4-acfd-0d2979cd16f9-secret-volume\") pod \"collect-profiles-29409660-gb5z4\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.223631 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdndd\" (UniqueName: \"kubernetes.io/projected/b222f1da-8be5-48e4-acfd-0d2979cd16f9-kube-api-access-fdndd\") pod \"collect-profiles-29409660-gb5z4\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.223896 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b222f1da-8be5-48e4-acfd-0d2979cd16f9-config-volume\") pod \"collect-profiles-29409660-gb5z4\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.325083 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b222f1da-8be5-48e4-acfd-0d2979cd16f9-config-volume\") pod \"collect-profiles-29409660-gb5z4\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.325141 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b222f1da-8be5-48e4-acfd-0d2979cd16f9-secret-volume\") pod \"collect-profiles-29409660-gb5z4\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.325202 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdndd\" (UniqueName: \"kubernetes.io/projected/b222f1da-8be5-48e4-acfd-0d2979cd16f9-kube-api-access-fdndd\") pod \"collect-profiles-29409660-gb5z4\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.326464 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b222f1da-8be5-48e4-acfd-0d2979cd16f9-config-volume\") pod \"collect-profiles-29409660-gb5z4\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.331699 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b222f1da-8be5-48e4-acfd-0d2979cd16f9-secret-volume\") pod \"collect-profiles-29409660-gb5z4\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.341604 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdndd\" (UniqueName: \"kubernetes.io/projected/b222f1da-8be5-48e4-acfd-0d2979cd16f9-kube-api-access-fdndd\") pod \"collect-profiles-29409660-gb5z4\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:00 crc kubenswrapper[4689]: I1201 09:00:00.466759 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.088079 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.105120 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.167205 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4"] Dec 01 09:00:01 crc kubenswrapper[4689]: W1201 09:00:01.175395 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb222f1da_8be5_48e4_acfd_0d2979cd16f9.slice/crio-0a7e9913010d7e147fb31ca8421e7a8b53eb26a1f20bcc93e0ad0f34794daafe WatchSource:0}: Error finding container 0a7e9913010d7e147fb31ca8421e7a8b53eb26a1f20bcc93e0ad0f34794daafe: Status 404 returned error can't find the container with id 0a7e9913010d7e147fb31ca8421e7a8b53eb26a1f20bcc93e0ad0f34794daafe Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.253964 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-combined-ca-bundle\") pod \"63fc0d00-4168-47eb-998a-d32962b46bad\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.254322 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prmsv\" (UniqueName: \"kubernetes.io/projected/08dd5230-a82f-43bc-9517-78b80ed7b39a-kube-api-access-prmsv\") pod \"08dd5230-a82f-43bc-9517-78b80ed7b39a\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.254481 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-config-data\") pod \"63fc0d00-4168-47eb-998a-d32962b46bad\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.254598 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-scripts\") pod \"63fc0d00-4168-47eb-998a-d32962b46bad\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.254699 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-config-data\") pod \"08dd5230-a82f-43bc-9517-78b80ed7b39a\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.254842 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-combined-ca-bundle\") pod \"08dd5230-a82f-43bc-9517-78b80ed7b39a\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.254930 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kjkv\" (UniqueName: \"kubernetes.io/projected/63fc0d00-4168-47eb-998a-d32962b46bad-kube-api-access-5kjkv\") pod \"63fc0d00-4168-47eb-998a-d32962b46bad\" (UID: \"63fc0d00-4168-47eb-998a-d32962b46bad\") " Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.255062 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-scripts\") pod 
\"08dd5230-a82f-43bc-9517-78b80ed7b39a\" (UID: \"08dd5230-a82f-43bc-9517-78b80ed7b39a\") " Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.260478 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63fc0d00-4168-47eb-998a-d32962b46bad-kube-api-access-5kjkv" (OuterVolumeSpecName: "kube-api-access-5kjkv") pod "63fc0d00-4168-47eb-998a-d32962b46bad" (UID: "63fc0d00-4168-47eb-998a-d32962b46bad"). InnerVolumeSpecName "kube-api-access-5kjkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.260740 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08dd5230-a82f-43bc-9517-78b80ed7b39a-kube-api-access-prmsv" (OuterVolumeSpecName: "kube-api-access-prmsv") pod "08dd5230-a82f-43bc-9517-78b80ed7b39a" (UID: "08dd5230-a82f-43bc-9517-78b80ed7b39a"). InnerVolumeSpecName "kube-api-access-prmsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.262979 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-scripts" (OuterVolumeSpecName: "scripts") pod "08dd5230-a82f-43bc-9517-78b80ed7b39a" (UID: "08dd5230-a82f-43bc-9517-78b80ed7b39a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.267988 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-scripts" (OuterVolumeSpecName: "scripts") pod "63fc0d00-4168-47eb-998a-d32962b46bad" (UID: "63fc0d00-4168-47eb-998a-d32962b46bad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.286010 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-config-data" (OuterVolumeSpecName: "config-data") pod "63fc0d00-4168-47eb-998a-d32962b46bad" (UID: "63fc0d00-4168-47eb-998a-d32962b46bad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.295745 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63fc0d00-4168-47eb-998a-d32962b46bad" (UID: "63fc0d00-4168-47eb-998a-d32962b46bad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.296947 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-config-data" (OuterVolumeSpecName: "config-data") pod "08dd5230-a82f-43bc-9517-78b80ed7b39a" (UID: "08dd5230-a82f-43bc-9517-78b80ed7b39a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.319154 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08dd5230-a82f-43bc-9517-78b80ed7b39a" (UID: "08dd5230-a82f-43bc-9517-78b80ed7b39a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.356913 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.356954 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kjkv\" (UniqueName: \"kubernetes.io/projected/63fc0d00-4168-47eb-998a-d32962b46bad-kube-api-access-5kjkv\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.356968 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.356977 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.356985 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prmsv\" (UniqueName: \"kubernetes.io/projected/08dd5230-a82f-43bc-9517-78b80ed7b39a-kube-api-access-prmsv\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.356994 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.357004 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63fc0d00-4168-47eb-998a-d32962b46bad-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.357014 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08dd5230-a82f-43bc-9517-78b80ed7b39a-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.509050 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" event={"ID":"b222f1da-8be5-48e4-acfd-0d2979cd16f9","Type":"ContainerStarted","Data":"d4d8dcb2c79858da09495dc2092899c8fafb4be4d595dd1578c87c7627f8aba6"} Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.509409 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" event={"ID":"b222f1da-8be5-48e4-acfd-0d2979cd16f9","Type":"ContainerStarted","Data":"0a7e9913010d7e147fb31ca8421e7a8b53eb26a1f20bcc93e0ad0f34794daafe"} Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.510758 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qds9f" event={"ID":"63fc0d00-4168-47eb-998a-d32962b46bad","Type":"ContainerDied","Data":"db46b3d3e74f753c8c5e8b7d1dcabf732363bc050aa77b5e431a62f5b6164c09"} Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.510764 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qds9f" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.510784 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db46b3d3e74f753c8c5e8b7d1dcabf732363bc050aa77b5e431a62f5b6164c09" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.512442 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gf7l5" event={"ID":"08dd5230-a82f-43bc-9517-78b80ed7b39a","Type":"ContainerDied","Data":"bcc94656b6d40a2e57c236cef3edd1f080dfe529a44808cdb96c03627414fb45"} Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.512466 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc94656b6d40a2e57c236cef3edd1f080dfe529a44808cdb96c03627414fb45" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.512508 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gf7l5" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.568946 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" podStartSLOduration=1.568925729 podStartE2EDuration="1.568925729s" podCreationTimestamp="2025-12-01 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:00:01.561102607 +0000 UTC m=+1281.633390501" watchObservedRunningTime="2025-12-01 09:00:01.568925729 +0000 UTC m=+1281.641213633" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.615606 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 09:00:01 crc kubenswrapper[4689]: E1201 09:00:01.615981 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63fc0d00-4168-47eb-998a-d32962b46bad" containerName="nova-cell1-conductor-db-sync" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.615994 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="63fc0d00-4168-47eb-998a-d32962b46bad" containerName="nova-cell1-conductor-db-sync" Dec 01 09:00:01 crc kubenswrapper[4689]: E1201 09:00:01.616030 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dd5230-a82f-43bc-9517-78b80ed7b39a" containerName="nova-manage" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.616039 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dd5230-a82f-43bc-9517-78b80ed7b39a" containerName="nova-manage" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.616223 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="08dd5230-a82f-43bc-9517-78b80ed7b39a" containerName="nova-manage" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.616245 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="63fc0d00-4168-47eb-998a-d32962b46bad" containerName="nova-cell1-conductor-db-sync" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.616881 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.618912 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.632433 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.766288 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a7f30c7-ee71-44e3-9aed-b1e65916e8b7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.766396 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhvmm\" (UniqueName: \"kubernetes.io/projected/4a7f30c7-ee71-44e3-9aed-b1e65916e8b7-kube-api-access-xhvmm\") pod \"nova-cell1-conductor-0\" (UID: \"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.766456 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7f30c7-ee71-44e3-9aed-b1e65916e8b7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.773712 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.774134 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-log" containerID="cri-o://32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5" gracePeriod=30 Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.774161 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-api" containerID="cri-o://d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d" gracePeriod=30 Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.781250 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.781538 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d6040d5d-158a-4d64-89b6-3f17ad666c40" containerName="nova-scheduler-scheduler" containerID="cri-o://c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2" gracePeriod=30 Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.826860 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.827104 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerName="nova-metadata-log" containerID="cri-o://87f9200281b2118b2b5ab85bcf7134d4cac4395099c853e0db14a32fdc07a69a" gracePeriod=30 Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.827206 4689 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-metadata-0" podUID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerName="nova-metadata-metadata" containerID="cri-o://769e3f29ff95551a8170b45735cba8e0c11cde51be24e7c5b4bfe82cef5d8129" gracePeriod=30 Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.868711 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhvmm\" (UniqueName: \"kubernetes.io/projected/4a7f30c7-ee71-44e3-9aed-b1e65916e8b7-kube-api-access-xhvmm\") pod \"nova-cell1-conductor-0\" (UID: \"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.868793 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7f30c7-ee71-44e3-9aed-b1e65916e8b7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.869061 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a7f30c7-ee71-44e3-9aed-b1e65916e8b7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.875103 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a7f30c7-ee71-44e3-9aed-b1e65916e8b7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.878339 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7f30c7-ee71-44e3-9aed-b1e65916e8b7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.890250 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhvmm\" (UniqueName: \"kubernetes.io/projected/4a7f30c7-ee71-44e3-9aed-b1e65916e8b7-kube-api-access-xhvmm\") pod \"nova-cell1-conductor-0\" (UID: \"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7\") " pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:01 crc kubenswrapper[4689]: I1201 09:00:01.932452 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.438354 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5784cf869f-mztfd" podUID="5493cbb2-5880-48d7-81fe-46ab0e2dcb68" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.521707 4689 generic.go:334] "Generic (PLEG): container finished" podID="b222f1da-8be5-48e4-acfd-0d2979cd16f9" containerID="d4d8dcb2c79858da09495dc2092899c8fafb4be4d595dd1578c87c7627f8aba6" exitCode=0 Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.521985 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" event={"ID":"b222f1da-8be5-48e4-acfd-0d2979cd16f9","Type":"ContainerDied","Data":"d4d8dcb2c79858da09495dc2092899c8fafb4be4d595dd1578c87c7627f8aba6"} Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.537680 4689 generic.go:334] "Generic (PLEG): container finished" podID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerID="32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5" exitCode=143 Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.537721 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4bef3fc-9bf2-4daf-a366-29c8129db360","Type":"ContainerDied","Data":"32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5"} Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.544578 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.544654 4689 generic.go:334] "Generic (PLEG): container finished" podID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerID="769e3f29ff95551a8170b45735cba8e0c11cde51be24e7c5b4bfe82cef5d8129" exitCode=0 Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.544681 4689 generic.go:334] "Generic (PLEG): container finished" podID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerID="87f9200281b2118b2b5ab85bcf7134d4cac4395099c853e0db14a32fdc07a69a" exitCode=143 Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.544704 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"977cbf76-96dd-4627-acc3-8f1eaf8ca809","Type":"ContainerDied","Data":"769e3f29ff95551a8170b45735cba8e0c11cde51be24e7c5b4bfe82cef5d8129"} Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.544732 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"977cbf76-96dd-4627-acc3-8f1eaf8ca809","Type":"ContainerDied","Data":"87f9200281b2118b2b5ab85bcf7134d4cac4395099c853e0db14a32fdc07a69a"} Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.764878 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.888296 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-nova-metadata-tls-certs\") pod \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.888620 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-combined-ca-bundle\") pod \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.888751 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-config-data\") pod \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.888835 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjkd\" (UniqueName: \"kubernetes.io/projected/977cbf76-96dd-4627-acc3-8f1eaf8ca809-kube-api-access-chjkd\") pod \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.888937 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977cbf76-96dd-4627-acc3-8f1eaf8ca809-logs\") pod \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\" (UID: \"977cbf76-96dd-4627-acc3-8f1eaf8ca809\") " Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.889640 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/977cbf76-96dd-4627-acc3-8f1eaf8ca809-logs" (OuterVolumeSpecName: "logs") pod "977cbf76-96dd-4627-acc3-8f1eaf8ca809" (UID: "977cbf76-96dd-4627-acc3-8f1eaf8ca809"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.909841 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977cbf76-96dd-4627-acc3-8f1eaf8ca809-kube-api-access-chjkd" (OuterVolumeSpecName: "kube-api-access-chjkd") pod "977cbf76-96dd-4627-acc3-8f1eaf8ca809" (UID: "977cbf76-96dd-4627-acc3-8f1eaf8ca809"). InnerVolumeSpecName "kube-api-access-chjkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.937662 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "977cbf76-96dd-4627-acc3-8f1eaf8ca809" (UID: "977cbf76-96dd-4627-acc3-8f1eaf8ca809"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.957631 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-config-data" (OuterVolumeSpecName: "config-data") pod "977cbf76-96dd-4627-acc3-8f1eaf8ca809" (UID: "977cbf76-96dd-4627-acc3-8f1eaf8ca809"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.958695 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "977cbf76-96dd-4627-acc3-8f1eaf8ca809" (UID: "977cbf76-96dd-4627-acc3-8f1eaf8ca809"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.991708 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.991910 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjkd\" (UniqueName: \"kubernetes.io/projected/977cbf76-96dd-4627-acc3-8f1eaf8ca809-kube-api-access-chjkd\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.991994 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977cbf76-96dd-4627-acc3-8f1eaf8ca809-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.992051 4689 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:02 crc kubenswrapper[4689]: I1201 09:00:02.992113 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977cbf76-96dd-4627-acc3-8f1eaf8ca809-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.371718 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.506194 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfddk\" (UniqueName: \"kubernetes.io/projected/d6040d5d-158a-4d64-89b6-3f17ad666c40-kube-api-access-cfddk\") pod \"d6040d5d-158a-4d64-89b6-3f17ad666c40\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.506472 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-combined-ca-bundle\") pod \"d6040d5d-158a-4d64-89b6-3f17ad666c40\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.506533 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-config-data\") pod \"d6040d5d-158a-4d64-89b6-3f17ad666c40\" (UID: \"d6040d5d-158a-4d64-89b6-3f17ad666c40\") " Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.517232 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6040d5d-158a-4d64-89b6-3f17ad666c40-kube-api-access-cfddk" (OuterVolumeSpecName: "kube-api-access-cfddk") pod "d6040d5d-158a-4d64-89b6-3f17ad666c40" (UID: "d6040d5d-158a-4d64-89b6-3f17ad666c40"). InnerVolumeSpecName "kube-api-access-cfddk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.550538 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-config-data" (OuterVolumeSpecName: "config-data") pod "d6040d5d-158a-4d64-89b6-3f17ad666c40" (UID: "d6040d5d-158a-4d64-89b6-3f17ad666c40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.555928 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6040d5d-158a-4d64-89b6-3f17ad666c40" (UID: "d6040d5d-158a-4d64-89b6-3f17ad666c40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.564198 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7","Type":"ContainerStarted","Data":"6b094f55499fbcf30823f21329c73fe0a3807f4686fe9323e6b6173c2728b7d9"} Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.564241 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4a7f30c7-ee71-44e3-9aed-b1e65916e8b7","Type":"ContainerStarted","Data":"59054664a9cdee2bc1f23b6f7fccbc1964757a3fab8d496a27a20e73e0bc8dd1"} Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.565446 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.580653 4689 generic.go:334] "Generic (PLEG): container finished" podID="d6040d5d-158a-4d64-89b6-3f17ad666c40" containerID="c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2" exitCode=0 Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.580719 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6040d5d-158a-4d64-89b6-3f17ad666c40","Type":"ContainerDied","Data":"c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2"} Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.580747 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6040d5d-158a-4d64-89b6-3f17ad666c40","Type":"ContainerDied","Data":"3aa81cd41953c10eef702ee8d320e32c8bd43d423e4e2ca4c400dcddc954b74c"} Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.580764 4689 scope.go:117] "RemoveContainer" containerID="c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.580885 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.602101 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.602230 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"977cbf76-96dd-4627-acc3-8f1eaf8ca809","Type":"ContainerDied","Data":"5a95664b3c11c06c4cfb0792c0922fece8b08101d3956c7d847f51ccd6c1d373"} Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.605444 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.60543384 podStartE2EDuration="2.60543384s" podCreationTimestamp="2025-12-01 09:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:00:03.599066709 +0000 UTC m=+1283.671354603" watchObservedRunningTime="2025-12-01 09:00:03.60543384 +0000 UTC m=+1283.677721744" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.612269 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.612302 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6040d5d-158a-4d64-89b6-3f17ad666c40-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.612314 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfddk\" (UniqueName: \"kubernetes.io/projected/d6040d5d-158a-4d64-89b6-3f17ad666c40-kube-api-access-cfddk\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.676256 4689 scope.go:117] "RemoveContainer" containerID="c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.677671 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:03 crc kubenswrapper[4689]: E1201 09:00:03.683838 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2\": container with ID starting with c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2 not found: ID does not exist" containerID="c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.683877 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2"} err="failed to get container status \"c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2\": rpc error: code = NotFound desc = could not find container \"c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2\": container with ID starting with c335021333ea411feaf224203bdf87f3b7f3c9045229455ee1b84db774e393f2 not found: ID does not exist" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.683896 4689 scope.go:117] "RemoveContainer" containerID="769e3f29ff95551a8170b45735cba8e0c11cde51be24e7c5b4bfe82cef5d8129" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.687330 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.706783 4689 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.723820 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.754642 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:03 crc kubenswrapper[4689]: E1201 09:00:03.755903 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerName="nova-metadata-log" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.755926 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerName="nova-metadata-log" Dec 01 09:00:03 crc kubenswrapper[4689]: E1201 09:00:03.755939 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerName="nova-metadata-metadata" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.755948 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerName="nova-metadata-metadata" Dec 01 09:00:03 crc kubenswrapper[4689]: E1201 09:00:03.755962 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6040d5d-158a-4d64-89b6-3f17ad666c40" containerName="nova-scheduler-scheduler" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.755969 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6040d5d-158a-4d64-89b6-3f17ad666c40" containerName="nova-scheduler-scheduler" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.756189 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerName="nova-metadata-metadata" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.756224 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" containerName="nova-metadata-log" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.756234 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6040d5d-158a-4d64-89b6-3f17ad666c40" containerName="nova-scheduler-scheduler" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.757756 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.761005 4689 scope.go:117] "RemoveContainer" containerID="87f9200281b2118b2b5ab85bcf7134d4cac4395099c853e0db14a32fdc07a69a" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.761463 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.774536 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.774709 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.776746 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.786307 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.807645 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.832061 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.920531 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.920895 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-config-data\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.920926 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mxsh\" (UniqueName: \"kubernetes.io/projected/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-kube-api-access-7mxsh\") pod \"nova-scheduler-0\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.920959 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.920999 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/351bf336-7502-4bd1-be87-b032449e4b00-logs\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.921052 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-config-data\") pod \"nova-scheduler-0\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.921099 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzb96\" (UniqueName: \"kubernetes.io/projected/351bf336-7502-4bd1-be87-b032449e4b00-kube-api-access-pzb96\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:03 crc kubenswrapper[4689]: I1201 09:00:03.921298 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.023608 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.023948 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/351bf336-7502-4bd1-be87-b032449e4b00-logs\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.024075 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-config-data\") pod \"nova-scheduler-0\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.024177 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzb96\" (UniqueName: \"kubernetes.io/projected/351bf336-7502-4bd1-be87-b032449e4b00-kube-api-access-pzb96\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.024685 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.024895 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.025040 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-config-data\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.025183 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mxsh\" (UniqueName: \"kubernetes.io/projected/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-kube-api-access-7mxsh\") pod \"nova-scheduler-0\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.028914 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/351bf336-7502-4bd1-be87-b032449e4b00-logs\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.033185 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.037087 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-config-data\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.032461 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.042903 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-config-data\") pod \"nova-scheduler-0\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.043361 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.052397 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mxsh\" (UniqueName: \"kubernetes.io/projected/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-kube-api-access-7mxsh\") pod \"nova-scheduler-0\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.054932 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzb96\" (UniqueName: \"kubernetes.io/projected/351bf336-7502-4bd1-be87-b032449e4b00-kube-api-access-pzb96\") pod \"nova-metadata-0\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.096108 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.112931 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.180272 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.330914 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b222f1da-8be5-48e4-acfd-0d2979cd16f9-secret-volume\") pod \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.331261 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdndd\" (UniqueName: \"kubernetes.io/projected/b222f1da-8be5-48e4-acfd-0d2979cd16f9-kube-api-access-fdndd\") pod \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.331403 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b222f1da-8be5-48e4-acfd-0d2979cd16f9-config-volume\") pod \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\" (UID: \"b222f1da-8be5-48e4-acfd-0d2979cd16f9\") " Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.333171 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b222f1da-8be5-48e4-acfd-0d2979cd16f9-config-volume" (OuterVolumeSpecName: "config-volume") pod "b222f1da-8be5-48e4-acfd-0d2979cd16f9" (UID: "b222f1da-8be5-48e4-acfd-0d2979cd16f9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.337231 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b222f1da-8be5-48e4-acfd-0d2979cd16f9-kube-api-access-fdndd" (OuterVolumeSpecName: "kube-api-access-fdndd") pod "b222f1da-8be5-48e4-acfd-0d2979cd16f9" (UID: "b222f1da-8be5-48e4-acfd-0d2979cd16f9"). InnerVolumeSpecName "kube-api-access-fdndd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.344600 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b222f1da-8be5-48e4-acfd-0d2979cd16f9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b222f1da-8be5-48e4-acfd-0d2979cd16f9" (UID: "b222f1da-8be5-48e4-acfd-0d2979cd16f9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.433072 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b222f1da-8be5-48e4-acfd-0d2979cd16f9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.433101 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b222f1da-8be5-48e4-acfd-0d2979cd16f9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.433111 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdndd\" (UniqueName: \"kubernetes.io/projected/b222f1da-8be5-48e4-acfd-0d2979cd16f9-kube-api-access-fdndd\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.611711 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" event={"ID":"b222f1da-8be5-48e4-acfd-0d2979cd16f9","Type":"ContainerDied","Data":"0a7e9913010d7e147fb31ca8421e7a8b53eb26a1f20bcc93e0ad0f34794daafe"} Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.611747 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a7e9913010d7e147fb31ca8421e7a8b53eb26a1f20bcc93e0ad0f34794daafe" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.611815 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4" Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.654666 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:04 crc kubenswrapper[4689]: I1201 09:00:04.681413 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:04 crc kubenswrapper[4689]: W1201 09:00:04.698516 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod351bf336_7502_4bd1_be87_b032449e4b00.slice/crio-f2ea5b1b1b1e1d4587b132137b5593b4b4066c7f50c08c5034fa99a4f713967d WatchSource:0}: Error finding container f2ea5b1b1b1e1d4587b132137b5593b4b4066c7f50c08c5034fa99a4f713967d: Status 404 returned error can't find the container with id f2ea5b1b1b1e1d4587b132137b5593b4b4066c7f50c08c5034fa99a4f713967d Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.058887 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977cbf76-96dd-4627-acc3-8f1eaf8ca809" path="/var/lib/kubelet/pods/977cbf76-96dd-4627-acc3-8f1eaf8ca809/volumes" Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.059885 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6040d5d-158a-4d64-89b6-3f17ad666c40" path="/var/lib/kubelet/pods/d6040d5d-158a-4d64-89b6-3f17ad666c40/volumes" Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.417275 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.561082 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqcjp\" (UniqueName: \"kubernetes.io/projected/c4bef3fc-9bf2-4daf-a366-29c8129db360-kube-api-access-kqcjp\") pod \"c4bef3fc-9bf2-4daf-a366-29c8129db360\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.561382 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-config-data\") pod \"c4bef3fc-9bf2-4daf-a366-29c8129db360\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.561635 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bef3fc-9bf2-4daf-a366-29c8129db360-logs\") pod \"c4bef3fc-9bf2-4daf-a366-29c8129db360\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.561793 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-combined-ca-bundle\") pod \"c4bef3fc-9bf2-4daf-a366-29c8129db360\" (UID: \"c4bef3fc-9bf2-4daf-a366-29c8129db360\") " Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.562091 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4bef3fc-9bf2-4daf-a366-29c8129db360-logs" (OuterVolumeSpecName: "logs") pod "c4bef3fc-9bf2-4daf-a366-29c8129db360" (UID: "c4bef3fc-9bf2-4daf-a366-29c8129db360"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.562586 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bef3fc-9bf2-4daf-a366-29c8129db360-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.572444 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4bef3fc-9bf2-4daf-a366-29c8129db360-kube-api-access-kqcjp" (OuterVolumeSpecName: "kube-api-access-kqcjp") pod "c4bef3fc-9bf2-4daf-a366-29c8129db360" (UID: "c4bef3fc-9bf2-4daf-a366-29c8129db360"). InnerVolumeSpecName "kube-api-access-kqcjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.603637 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-config-data" (OuterVolumeSpecName: "config-data") pod "c4bef3fc-9bf2-4daf-a366-29c8129db360" (UID: "c4bef3fc-9bf2-4daf-a366-29c8129db360"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.610977 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4bef3fc-9bf2-4daf-a366-29c8129db360" (UID: "c4bef3fc-9bf2-4daf-a366-29c8129db360"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.641185 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"351bf336-7502-4bd1-be87-b032449e4b00","Type":"ContainerStarted","Data":"2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80"} Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.642003 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"351bf336-7502-4bd1-be87-b032449e4b00","Type":"ContainerStarted","Data":"7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4"} Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.642118 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"351bf336-7502-4bd1-be87-b032449e4b00","Type":"ContainerStarted","Data":"f2ea5b1b1b1e1d4587b132137b5593b4b4066c7f50c08c5034fa99a4f713967d"} Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.645087 4689 generic.go:334] "Generic (PLEG): container finished" podID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerID="d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d" exitCode=0 Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.645187 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4bef3fc-9bf2-4daf-a366-29c8129db360","Type":"ContainerDied","Data":"d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d"} Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.645218 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4bef3fc-9bf2-4daf-a366-29c8129db360","Type":"ContainerDied","Data":"2a8d72d3b633207c7345eaca11f8761d6ca5fdf6287256def1b35eb9bf006e4a"} Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.645237 4689 scope.go:117] "RemoveContainer" containerID="d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d" Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.645398 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.657411 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d","Type":"ContainerStarted","Data":"5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba"}
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.657823 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d","Type":"ContainerStarted","Data":"f6ee744bfc413f3635aa932b4814d4aed70cabf2eedb2ab326e8a68a8b4c425c"}
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.669157 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqcjp\" (UniqueName: \"kubernetes.io/projected/c4bef3fc-9bf2-4daf-a366-29c8129db360-kube-api-access-kqcjp\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.669391 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.669486 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4bef3fc-9bf2-4daf-a366-29c8129db360-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.704991 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7049714639999998 podStartE2EDuration="2.704971464s" podCreationTimestamp="2025-12-01 09:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:00:05.704109641 +0000 UTC m=+1285.776397545" watchObservedRunningTime="2025-12-01 09:00:05.704971464 +0000 UTC m=+1285.777259368"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.709080 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.709066875 podStartE2EDuration="2.709066875s" podCreationTimestamp="2025-12-01 09:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:00:05.68073308 +0000 UTC m=+1285.753020984" watchObservedRunningTime="2025-12-01 09:00:05.709066875 +0000 UTC m=+1285.781354779"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.731506 4689 scope.go:117] "RemoveContainer" containerID="32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.751562 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.760519 4689 scope.go:117] "RemoveContainer" containerID="d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.762584 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 01 09:00:05 crc kubenswrapper[4689]: E1201 09:00:05.762857 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d\": container with ID starting with d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d not found: ID does not exist" containerID="d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.762898 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d"} err="failed to get container status \"d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d\": rpc error: code = NotFound desc = could not find container \"d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d\": container with ID starting with d0b73d1af1bd065da9ea2195a49a229e7b9d7064a16fc3dd8beead91a4603a8d not found: ID does not exist"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.762929 4689 scope.go:117] "RemoveContainer" containerID="32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5"
Dec 01 09:00:05 crc kubenswrapper[4689]: E1201 09:00:05.763857 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5\": container with ID starting with 32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5 not found: ID does not exist" containerID="32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.763894 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5"} err="failed to get container status \"32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5\": rpc error: code = NotFound desc = could not find container \"32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5\": container with ID starting with 32c9256f3a21c69e378ab512f259fefac500a84846bba1151754d3d5a73c11b5 not found: ID does not exist"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.777249 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 01 09:00:05 crc kubenswrapper[4689]: E1201 09:00:05.777756 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b222f1da-8be5-48e4-acfd-0d2979cd16f9" containerName="collect-profiles"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.777780 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b222f1da-8be5-48e4-acfd-0d2979cd16f9" containerName="collect-profiles"
Dec 01 09:00:05 crc kubenswrapper[4689]: E1201 09:00:05.777829 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-log"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.777837 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-log"
Dec 01 09:00:05 crc kubenswrapper[4689]: E1201 09:00:05.777848 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-api"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.777854 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-api"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.778125 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-api"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.778140 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" containerName="nova-api-log"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.778159 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b222f1da-8be5-48e4-acfd-0d2979cd16f9" containerName="collect-profiles"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.779492 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.783906 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.789115 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.876981 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.877338 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qrf\" (UniqueName: \"kubernetes.io/projected/3bcd3c9e-fe73-431b-999e-70f81990ffc8-kube-api-access-d7qrf\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.877625 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bcd3c9e-fe73-431b-999e-70f81990ffc8-logs\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.877789 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-config-data\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.979663 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qrf\" (UniqueName: \"kubernetes.io/projected/3bcd3c9e-fe73-431b-999e-70f81990ffc8-kube-api-access-d7qrf\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.979790 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bcd3c9e-fe73-431b-999e-70f81990ffc8-logs\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.979877 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-config-data\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.979949 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:05 crc kubenswrapper[4689]: I1201 09:00:05.981031 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bcd3c9e-fe73-431b-999e-70f81990ffc8-logs\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:06 crc kubenswrapper[4689]: I1201 09:00:05.991087 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-config-data\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:06 crc kubenswrapper[4689]: I1201 09:00:06.019006 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:06 crc kubenswrapper[4689]: I1201 09:00:06.028766 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7qrf\" (UniqueName: \"kubernetes.io/projected/3bcd3c9e-fe73-431b-999e-70f81990ffc8-kube-api-access-d7qrf\") pod \"nova-api-0\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") " pod="openstack/nova-api-0"
Dec 01 09:00:06 crc kubenswrapper[4689]: I1201 09:00:06.118544 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 01 09:00:06 crc kubenswrapper[4689]: I1201 09:00:06.628678 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 01 09:00:06 crc kubenswrapper[4689]: I1201 09:00:06.671488 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bcd3c9e-fe73-431b-999e-70f81990ffc8","Type":"ContainerStarted","Data":"f636fc76083a263d2d644b5efeaf81af0f961e97c05f7836c072ed45942808bb"}
Dec 01 09:00:07 crc kubenswrapper[4689]: I1201 09:00:07.061694 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4bef3fc-9bf2-4daf-a366-29c8129db360" path="/var/lib/kubelet/pods/c4bef3fc-9bf2-4daf-a366-29c8129db360/volumes"
Dec 01 09:00:07 crc kubenswrapper[4689]: I1201 09:00:07.314633 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 01 09:00:07 crc kubenswrapper[4689]: I1201 09:00:07.691306 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bcd3c9e-fe73-431b-999e-70f81990ffc8","Type":"ContainerStarted","Data":"887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606"}
Dec 01 09:00:07 crc kubenswrapper[4689]: I1201 09:00:07.691379 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bcd3c9e-fe73-431b-999e-70f81990ffc8","Type":"ContainerStarted","Data":"284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829"}
Dec 01 09:00:07 crc kubenswrapper[4689]: I1201 09:00:07.716984 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.716962094 podStartE2EDuration="2.716962094s" podCreationTimestamp="2025-12-01 09:00:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:00:07.710495188 +0000 UTC m=+1287.782783092" watchObservedRunningTime="2025-12-01 09:00:07.716962094 +0000 UTC m=+1287.789249998"
Dec 01 09:00:09 crc kubenswrapper[4689]: I1201 09:00:09.096928 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 01 09:00:09 crc kubenswrapper[4689]: I1201 09:00:09.097253 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 01 09:00:09 crc kubenswrapper[4689]: I1201 09:00:09.114456 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 01 09:00:09 crc kubenswrapper[4689]: I1201 09:00:09.147398 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:00:09 crc kubenswrapper[4689]: I1201 09:00:09.147464 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:00:11 crc kubenswrapper[4689]: I1201 09:00:11.960285 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Dec 01 09:00:14 crc kubenswrapper[4689]: I1201 09:00:14.097259 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 01 09:00:14 crc kubenswrapper[4689]: I1201 09:00:14.097680 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 01 09:00:14 crc kubenswrapper[4689]: I1201 09:00:14.113921 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 01 09:00:14 crc kubenswrapper[4689]: I1201 09:00:14.167895 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 01 09:00:14 crc kubenswrapper[4689]: I1201 09:00:14.786003 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 01 09:00:15 crc kubenswrapper[4689]: I1201 09:00:15.114544 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:00:15 crc kubenswrapper[4689]: I1201 09:00:15.114859 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:00:16 crc kubenswrapper[4689]: I1201 09:00:16.120117 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 01 09:00:16 crc kubenswrapper[4689]: I1201 09:00:16.120183 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 01 09:00:17 crc kubenswrapper[4689]: I1201 09:00:17.201564 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:00:17 crc kubenswrapper[4689]: I1201 09:00:17.201625 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.821280 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.841944 4689 generic.go:334] "Generic (PLEG): container finished" podID="fd6b584a-d753-4c05-a893-b160f9109965" containerID="40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f" exitCode=137
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.842000 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fd6b584a-d753-4c05-a893-b160f9109965","Type":"ContainerDied","Data":"40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f"}
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.842035 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fd6b584a-d753-4c05-a893-b160f9109965","Type":"ContainerDied","Data":"3b3f8552718664b5bc3ea1db52c05874c3c9b8e30a7b608cb9e78fa769814532"}
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.842055 4689 scope.go:117] "RemoveContainer" containerID="40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f"
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.842215 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.873787 4689 scope.go:117] "RemoveContainer" containerID="40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f"
Dec 01 09:00:23 crc kubenswrapper[4689]: E1201 09:00:23.874486 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f\": container with ID starting with 40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f not found: ID does not exist" containerID="40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f"
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.874547 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f"} err="failed to get container status \"40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f\": rpc error: code = NotFound desc = could not find container \"40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f\": container with ID starting with 40dc725143e3e76b8dd21236f927d1798ba62d5eb8337fb070a3e93226aca31f not found: ID does not exist"
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.937555 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-config-data\") pod \"fd6b584a-d753-4c05-a893-b160f9109965\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") "
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.938069 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-combined-ca-bundle\") pod \"fd6b584a-d753-4c05-a893-b160f9109965\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") "
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.938197 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5wnd\" (UniqueName: \"kubernetes.io/projected/fd6b584a-d753-4c05-a893-b160f9109965-kube-api-access-h5wnd\") pod \"fd6b584a-d753-4c05-a893-b160f9109965\" (UID: \"fd6b584a-d753-4c05-a893-b160f9109965\") "
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.953952 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6b584a-d753-4c05-a893-b160f9109965-kube-api-access-h5wnd" (OuterVolumeSpecName: "kube-api-access-h5wnd") pod "fd6b584a-d753-4c05-a893-b160f9109965" (UID: "fd6b584a-d753-4c05-a893-b160f9109965"). InnerVolumeSpecName "kube-api-access-h5wnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:00:23 crc kubenswrapper[4689]: I1201 09:00:23.975180 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd6b584a-d753-4c05-a893-b160f9109965" (UID: "fd6b584a-d753-4c05-a893-b160f9109965"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:23.998836 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-config-data" (OuterVolumeSpecName: "config-data") pod "fd6b584a-d753-4c05-a893-b160f9109965" (UID: "fd6b584a-d753-4c05-a893-b160f9109965"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.040074 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.040330 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5wnd\" (UniqueName: \"kubernetes.io/projected/fd6b584a-d753-4c05-a893-b160f9109965-kube-api-access-h5wnd\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.040429 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6b584a-d753-4c05-a893-b160f9109965-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.101733 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.105524 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.111047 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.179772 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.197547 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.242077 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 01 09:00:24 crc kubenswrapper[4689]: E1201 09:00:24.243210 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6b584a-d753-4c05-a893-b160f9109965" containerName="nova-cell1-novncproxy-novncproxy"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.243235 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6b584a-d753-4c05-a893-b160f9109965" containerName="nova-cell1-novncproxy-novncproxy"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.243662 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6b584a-d753-4c05-a893-b160f9109965" containerName="nova-cell1-novncproxy-novncproxy"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.244608 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.251974 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.252674 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.252715 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.252766 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.346668 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.346761 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.346789 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2ssd\" (UniqueName: \"kubernetes.io/projected/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-kube-api-access-n2ssd\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.346810 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.346831 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.448872 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.448932 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2ssd\" (UniqueName: \"kubernetes.io/projected/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-kube-api-access-n2ssd\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.448956 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.448973 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.449059 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.453003 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.454729 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.455280 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.455834 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.467279 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2ssd\" (UniqueName: \"kubernetes.io/projected/d1e959a4-6ab1-4c6c-86e4-8e319fc8806a-kube-api-access-n2ssd\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.564195 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:24 crc kubenswrapper[4689]: I1201 09:00:24.866478 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 01 09:00:25 crc kubenswrapper[4689]: I1201 09:00:25.030508 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 01 09:00:25 crc kubenswrapper[4689]: W1201 09:00:25.048831 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1e959a4_6ab1_4c6c_86e4_8e319fc8806a.slice/crio-d6af5b4b18110973f3f2949476d68b0f6dae57cd551c63175c300390a9ac942b WatchSource:0}: Error finding container d6af5b4b18110973f3f2949476d68b0f6dae57cd551c63175c300390a9ac942b: Status 404 returned error can't find the container with id d6af5b4b18110973f3f2949476d68b0f6dae57cd551c63175c300390a9ac942b
Dec 01 09:00:25 crc kubenswrapper[4689]: I1201 09:00:25.084278 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6b584a-d753-4c05-a893-b160f9109965" path="/var/lib/kubelet/pods/fd6b584a-d753-4c05-a893-b160f9109965/volumes"
Dec 01 09:00:25 crc kubenswrapper[4689]: I1201 09:00:25.869443 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a","Type":"ContainerStarted","Data":"d9ace9c715f676490685374a30c53cacb18d6b73d46439802726d8dfe8db06b4"}
Dec 01 09:00:25 crc kubenswrapper[4689]: I1201 09:00:25.869722 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1e959a4-6ab1-4c6c-86e4-8e319fc8806a","Type":"ContainerStarted","Data":"d6af5b4b18110973f3f2949476d68b0f6dae57cd551c63175c300390a9ac942b"}
Dec 01 09:00:25 crc kubenswrapper[4689]: I1201 09:00:25.904961 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.9049421180000001 podStartE2EDuration="1.904942118s" podCreationTimestamp="2025-12-01 09:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:00:25.897188008 +0000 UTC m=+1305.969475912" watchObservedRunningTime="2025-12-01 09:00:25.904942118 +0000 UTC m=+1305.977230012"
Dec 01 09:00:26 crc kubenswrapper[4689]: I1201 09:00:26.127239 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 01 09:00:26 crc kubenswrapper[4689]: I1201 09:00:26.127604 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 01 09:00:26 crc kubenswrapper[4689]: I1201 09:00:26.133861 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 01 09:00:26 crc kubenswrapper[4689]: I1201 09:00:26.138345 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 01 09:00:26 crc kubenswrapper[4689]: I1201 09:00:26.877961 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 01 09:00:26 crc kubenswrapper[4689]: I1201 09:00:26.883005 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.171465 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-ks5fk"]
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.173422 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.202825 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-ks5fk"]
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.315037 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.315076 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.315110 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtwl\" (UniqueName: \"kubernetes.io/projected/a85b360e-a5e5-4769-bc64-7ccebba08bd1-kube-api-access-bhtwl\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.315128 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-config\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.315208 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.315537 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.417266 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhtwl\" (UniqueName: \"kubernetes.io/projected/a85b360e-a5e5-4769-bc64-7ccebba08bd1-kube-api-access-bhtwl\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.417307 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-config\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.417329 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.417430 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.417523 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.417544 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.418590 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.418711 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-config\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.419251 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.419274 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.419291 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.448697 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhtwl\" (UniqueName: \"kubernetes.io/projected/a85b360e-a5e5-4769-bc64-7ccebba08bd1-kube-api-access-bhtwl\") pod \"dnsmasq-dns-59cf4bdb65-ks5fk\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:27 crc kubenswrapper[4689]: I1201 09:00:27.497953 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:28 crc kubenswrapper[4689]: I1201 09:00:28.114857 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-ks5fk"]
Dec 01 09:00:28 crc kubenswrapper[4689]: I1201 09:00:28.904157 4689 generic.go:334] "Generic (PLEG): container finished" podID="a85b360e-a5e5-4769-bc64-7ccebba08bd1" containerID="76bfec9994a58e803d9884acd5b9ff9e9dcdfdeeabb617bcd1d4c51dd7d37aed" exitCode=0
Dec 01 09:00:28 crc kubenswrapper[4689]: I1201 09:00:28.904599 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" event={"ID":"a85b360e-a5e5-4769-bc64-7ccebba08bd1","Type":"ContainerDied","Data":"76bfec9994a58e803d9884acd5b9ff9e9dcdfdeeabb617bcd1d4c51dd7d37aed"}
Dec 01 09:00:28 crc kubenswrapper[4689]: I1201 09:00:28.904883 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" event={"ID":"a85b360e-a5e5-4769-bc64-7ccebba08bd1","Type":"ContainerStarted","Data":"8c94115d4e552b1a069f91599a3b48c0de11c87ec5cf5fe45d816bbd6f1741ba"}
Dec 01 09:00:29 crc kubenswrapper[4689]: I1201 09:00:29.564592 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Dec 01 09:00:29 crc kubenswrapper[4689]: I1201 09:00:29.917025 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" event={"ID":"a85b360e-a5e5-4769-bc64-7ccebba08bd1","Type":"ContainerStarted","Data":"41d8cc8ee569545f61c9e57189483e8a3aeed7f310f24311f854b5cb0df6df1d"}
Dec 01 09:00:29 crc kubenswrapper[4689]: I1201 09:00:29.917494 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk"
Dec 01 09:00:29 crc kubenswrapper[4689]: I1201 09:00:29.945200 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" podStartSLOduration=2.945174164 podStartE2EDuration="2.945174164s" podCreationTimestamp="2025-12-01 09:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:00:29.934552387 +0000 UTC m=+1310.006840291" watchObservedRunningTime="2025-12-01 09:00:29.945174164 +0000 UTC m=+1310.017462078"
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.047907 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.052177 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-log" containerID="cri-o://284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829" gracePeriod=30
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.053566 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-api" containerID="cri-o://887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606" gracePeriod=30
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.435147 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.436083 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="ceilometer-central-agent" containerID="cri-o://ae05c105819f7b8b5d0855d5d06b27ba8a91eb34cde922ba4feca5fdf9f4e5c9" gracePeriod=30
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.436699 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="proxy-httpd" containerID="cri-o://cd17cffe682a76fad9d61d955d69ed2977145dda1dc83815feb448dda30ac918" gracePeriod=30
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.438176 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="ceilometer-notification-agent" containerID="cri-o://be8cc33258dfc0a1bf1d4f5c12eca9a0ee0f07e8ad7da84c3ca1bec9bbb28daa" gracePeriod=30
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.438277 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="sg-core" containerID="cri-o://2247cac836a1a5d7af8f2e4b0043ccf541b27ba3ff89f4921e4f1e6a5d6c4f63" gracePeriod=30
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.927110 4689 generic.go:334] "Generic (PLEG): container finished" podID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerID="284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829" exitCode=143
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.927186 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bcd3c9e-fe73-431b-999e-70f81990ffc8","Type":"ContainerDied","Data":"284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829"}
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.930060 4689 generic.go:334] "Generic (PLEG): container finished" podID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerID="cd17cffe682a76fad9d61d955d69ed2977145dda1dc83815feb448dda30ac918" exitCode=0
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.930091 4689 generic.go:334] "Generic (PLEG): container finished" podID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerID="2247cac836a1a5d7af8f2e4b0043ccf541b27ba3ff89f4921e4f1e6a5d6c4f63" exitCode=2
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.930134 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerDied","Data":"cd17cffe682a76fad9d61d955d69ed2977145dda1dc83815feb448dda30ac918"}
Dec 01 09:00:30 crc kubenswrapper[4689]: I1201 09:00:30.930155 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerDied","Data":"2247cac836a1a5d7af8f2e4b0043ccf541b27ba3ff89f4921e4f1e6a5d6c4f63"}
Dec 01 09:00:31 crc kubenswrapper[4689]: I1201 09:00:31.950112 4689 generic.go:334] "Generic (PLEG): container finished" podID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerID="be8cc33258dfc0a1bf1d4f5c12eca9a0ee0f07e8ad7da84c3ca1bec9bbb28daa" exitCode=0
Dec 01 09:00:31 crc kubenswrapper[4689]: I1201 09:00:31.950540 4689 generic.go:334] "Generic (PLEG): container finished" podID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerID="ae05c105819f7b8b5d0855d5d06b27ba8a91eb34cde922ba4feca5fdf9f4e5c9" exitCode=0
Dec 01 09:00:31 crc kubenswrapper[4689]: I1201 09:00:31.950583 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerDied","Data":"be8cc33258dfc0a1bf1d4f5c12eca9a0ee0f07e8ad7da84c3ca1bec9bbb28daa"}
Dec 01 09:00:31 crc kubenswrapper[4689]: I1201 09:00:31.950615 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerDied","Data":"ae05c105819f7b8b5d0855d5d06b27ba8a91eb34cde922ba4feca5fdf9f4e5c9"}
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.337529 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.520209 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56wc4\" (UniqueName: \"kubernetes.io/projected/b80ca10a-fd51-4c1c-b6ed-ca470af97459-kube-api-access-56wc4\") pod \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") "
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.520304 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-log-httpd\") pod \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") "
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.520397 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-combined-ca-bundle\") pod \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") "
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.520469 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-ceilometer-tls-certs\") pod \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") "
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.520515 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-sg-core-conf-yaml\") pod \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") "
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.520597 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-config-data\") pod \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") "
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.520635 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-run-httpd\") pod \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") "
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.520672 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-scripts\") pod \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\" (UID: \"b80ca10a-fd51-4c1c-b6ed-ca470af97459\") "
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.531757 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b80ca10a-fd51-4c1c-b6ed-ca470af97459" (UID: "b80ca10a-fd51-4c1c-b6ed-ca470af97459"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.534671 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b80ca10a-fd51-4c1c-b6ed-ca470af97459" (UID: "b80ca10a-fd51-4c1c-b6ed-ca470af97459"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.535024 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-scripts" (OuterVolumeSpecName: "scripts") pod "b80ca10a-fd51-4c1c-b6ed-ca470af97459" (UID: "b80ca10a-fd51-4c1c-b6ed-ca470af97459"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.544743 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80ca10a-fd51-4c1c-b6ed-ca470af97459-kube-api-access-56wc4" (OuterVolumeSpecName: "kube-api-access-56wc4") pod "b80ca10a-fd51-4c1c-b6ed-ca470af97459" (UID: "b80ca10a-fd51-4c1c-b6ed-ca470af97459"). InnerVolumeSpecName "kube-api-access-56wc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.622796 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.622830 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-scripts\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.622840 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56wc4\" (UniqueName: \"kubernetes.io/projected/b80ca10a-fd51-4c1c-b6ed-ca470af97459-kube-api-access-56wc4\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.622849 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b80ca10a-fd51-4c1c-b6ed-ca470af97459-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.642216 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b80ca10a-fd51-4c1c-b6ed-ca470af97459" (UID: "b80ca10a-fd51-4c1c-b6ed-ca470af97459"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.645535 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b80ca10a-fd51-4c1c-b6ed-ca470af97459" (UID: "b80ca10a-fd51-4c1c-b6ed-ca470af97459"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.703612 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b80ca10a-fd51-4c1c-b6ed-ca470af97459" (UID: "b80ca10a-fd51-4c1c-b6ed-ca470af97459"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.725019 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.725616 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.725686 4689 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.729699 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-config-data" (OuterVolumeSpecName: "config-data") pod "b80ca10a-fd51-4c1c-b6ed-ca470af97459" (UID: "b80ca10a-fd51-4c1c-b6ed-ca470af97459"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.827223 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b80ca10a-fd51-4c1c-b6ed-ca470af97459-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.960193 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b80ca10a-fd51-4c1c-b6ed-ca470af97459","Type":"ContainerDied","Data":"1f9ecc37647420096cf8d05c17e3f0511511c21b496c1746e7448c22fc7f24e9"}
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.960268 4689 scope.go:117] "RemoveContainer" containerID="cd17cffe682a76fad9d61d955d69ed2977145dda1dc83815feb448dda30ac918"
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.961184 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.991051 4689 scope.go:117] "RemoveContainer" containerID="2247cac836a1a5d7af8f2e4b0043ccf541b27ba3ff89f4921e4f1e6a5d6c4f63"
Dec 01 09:00:32 crc kubenswrapper[4689]: I1201 09:00:32.999039 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.022733 4689 scope.go:117] "RemoveContainer" containerID="be8cc33258dfc0a1bf1d4f5c12eca9a0ee0f07e8ad7da84c3ca1bec9bbb28daa"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.024749 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.038423 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:00:33 crc kubenswrapper[4689]: E1201 09:00:33.038839 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="sg-core"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.038858 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="sg-core"
Dec 01 09:00:33 crc kubenswrapper[4689]: E1201 09:00:33.038869 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="proxy-httpd"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.038876 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="proxy-httpd"
Dec 01 09:00:33 crc kubenswrapper[4689]: E1201 09:00:33.038886 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="ceilometer-notification-agent"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.038891 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="ceilometer-notification-agent"
Dec 01 09:00:33 crc kubenswrapper[4689]: E1201 09:00:33.038907 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="ceilometer-central-agent"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.038913 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="ceilometer-central-agent"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.039089 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="proxy-httpd"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.039102 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="sg-core"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.039117 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="ceilometer-central-agent"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.039135 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" containerName="ceilometer-notification-agent"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.040755 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.047325 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.047341 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.047477 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.055269 4689 scope.go:117] "RemoveContainer" containerID="ae05c105819f7b8b5d0855d5d06b27ba8a91eb34cde922ba4feca5fdf9f4e5c9"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.063395 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b80ca10a-fd51-4c1c-b6ed-ca470af97459" path="/var/lib/kubelet/pods/b80ca10a-fd51-4c1c-b6ed-ca470af97459/volumes"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.068947 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.290348 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-config-data\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.290408 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.290441 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-run-httpd\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.290470 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-scripts\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.290502 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-log-httpd\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.290533 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.290549 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.290812 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbh96\" (UniqueName: \"kubernetes.io/projected/913d1dab-72d0-4f7b-bea3-78aabac0d13f-kube-api-access-nbh96\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.392215 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-config-data\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.392547 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.392569 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-run-httpd\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.392605 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-scripts\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.392638 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-log-httpd\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.392675 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.392690 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.392770 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbh96\" (UniqueName: \"kubernetes.io/projected/913d1dab-72d0-4f7b-bea3-78aabac0d13f-kube-api-access-nbh96\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.394844 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-log-httpd\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.395203 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-run-httpd\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.403591 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-scripts\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.408684 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.408964 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.410178 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-config-data\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.420084 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbh96\" (UniqueName: \"kubernetes.io/projected/913d1dab-72d0-4f7b-bea3-78aabac0d13f-kube-api-access-nbh96\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.420707 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") " pod="openstack/ceilometer-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.664840 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.707822 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
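
The eight "MountVolume.SetUp succeeded" entries above close out the volume reconcile for the recreated openstack/ceilometer-0: every volume that operationExecutor.VerifyControllerAttachedVolume registered at 09:00:33.29 is mounted by 09:00:33.42. Because kubelet emits these as structured key=value text behind a fixed journald prefix, a dump like this can be audited mechanically. The following Python sketch is illustrative only (the regexes and the function name mounted_volumes are assumptions, not kubelet code); it splits the dump on the journald prefix and tallies successful mounts per pod:

    import re
    from collections import Counter

    # Each record begins with a journald prefix such as
    # "Dec 01 09:00:33 crc kubenswrapper[4689]: "; split on it with a
    # zero-width lookahead so the prefix stays attached to its record.
    ENTRY = re.compile(r'(?=\w{3} \d{2} \d{2}:\d{2}:\d{2} \S+ kubenswrapper\[\d+\]: )')
    MOUNT_OK = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)\\?"')
    POD = re.compile(r'pod="([^"]+)"')

    def mounted_volumes(journal_text: str) -> Counter:
        """Tally successful volume mounts per pod in a kubelet journal dump."""
        counts = Counter()
        for entry in ENTRY.split(journal_text):
            if MOUNT_OK.search(entry):
                pod = POD.search(entry)
                if pod:
                    counts[pod.group(1)] += 1
        return counts

Run over this section it would count 8 mounts for openstack/ceilometer-0, one per declared volume: config-data, scripts, run-httpd, log-httpd, combined-ca-bundle, ceilometer-tls-certs, sg-core-conf-yaml and kube-api-access-nbh96.
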
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.810597 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-config-data\") pod \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") "
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.810680 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7qrf\" (UniqueName: \"kubernetes.io/projected/3bcd3c9e-fe73-431b-999e-70f81990ffc8-kube-api-access-d7qrf\") pod \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") "
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.810738 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-combined-ca-bundle\") pod \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") "
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.810832 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bcd3c9e-fe73-431b-999e-70f81990ffc8-logs\") pod \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\" (UID: \"3bcd3c9e-fe73-431b-999e-70f81990ffc8\") "
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.814598 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bcd3c9e-fe73-431b-999e-70f81990ffc8-logs" (OuterVolumeSpecName: "logs") pod "3bcd3c9e-fe73-431b-999e-70f81990ffc8" (UID: "3bcd3c9e-fe73-431b-999e-70f81990ffc8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.823259 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcd3c9e-fe73-431b-999e-70f81990ffc8-kube-api-access-d7qrf" (OuterVolumeSpecName: "kube-api-access-d7qrf") pod "3bcd3c9e-fe73-431b-999e-70f81990ffc8" (UID: "3bcd3c9e-fe73-431b-999e-70f81990ffc8"). InnerVolumeSpecName "kube-api-access-d7qrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.858297 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bcd3c9e-fe73-431b-999e-70f81990ffc8" (UID: "3bcd3c9e-fe73-431b-999e-70f81990ffc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.877297 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-config-data" (OuterVolumeSpecName: "config-data") pod "3bcd3c9e-fe73-431b-999e-70f81990ffc8" (UID: "3bcd3c9e-fe73-431b-999e-70f81990ffc8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.913661 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.913701 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bcd3c9e-fe73-431b-999e-70f81990ffc8-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.913712 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bcd3c9e-fe73-431b-999e-70f81990ffc8-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.913724 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7qrf\" (UniqueName: \"kubernetes.io/projected/3bcd3c9e-fe73-431b-999e-70f81990ffc8-kube-api-access-d7qrf\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.986242 4689 generic.go:334] "Generic (PLEG): container finished" podID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerID="887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606" exitCode=0 Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.986314 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bcd3c9e-fe73-431b-999e-70f81990ffc8","Type":"ContainerDied","Data":"887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606"} Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.986340 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bcd3c9e-fe73-431b-999e-70f81990ffc8","Type":"ContainerDied","Data":"f636fc76083a263d2d644b5efeaf81af0f961e97c05f7836c072ed45942808bb"} Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.986356 4689 scope.go:117] "RemoveContainer" containerID="887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606" Dec 01 09:00:33 crc kubenswrapper[4689]: I1201 09:00:33.986619 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.043230 4689 scope.go:117] "RemoveContainer" containerID="284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.054849 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.085860 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.092682 4689 scope.go:117] "RemoveContainer" containerID="887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606" Dec 01 09:00:34 crc kubenswrapper[4689]: E1201 09:00:34.098285 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606\": container with ID starting with 887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606 not found: ID does not exist" containerID="887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.098609 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606"} err="failed to get container status \"887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606\": rpc error: code = NotFound desc = could not find container \"887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606\": container with ID starting with 887e7f6c9bed4a9e564d18b0538859d7a61393e12255e9dbb8cfe1506d294606 not found: ID does not exist" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.098633 4689 scope.go:117] "RemoveContainer" containerID="284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829" Dec 01 09:00:34 crc kubenswrapper[4689]: E1201 09:00:34.101719 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829\": container with ID starting with 284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829 not found: ID does not exist" containerID="284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.101782 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829"} err="failed to get container status \"284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829\": rpc error: code = NotFound desc = could not find container \"284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829\": container with ID starting with 284d6c5d242588714740ab5e37498a7e51c66a6329a1d26055a0db9498011829 not found: ID does not exist" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.108703 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 09:00:34 crc kubenswrapper[4689]: E1201 09:00:34.109154 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-api" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.109166 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-api" Dec 01 09:00:34 crc 
kubenswrapper[4689]: E1201 09:00:34.109189 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-log" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.109196 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-log" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.109360 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-api" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.109402 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" containerName="nova-api-log" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.110353 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.125730 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.125834 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.126402 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.136871 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.229310 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.229510 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-config-data\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.229764 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkx6n\" (UniqueName: \"kubernetes.io/projected/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-kube-api-access-lkx6n\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.230022 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-public-tls-certs\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.230075 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.230213 4689 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-logs\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.331932 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.332012 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-config-data\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.332053 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkx6n\" (UniqueName: \"kubernetes.io/projected/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-kube-api-access-lkx6n\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.332112 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-public-tls-certs\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.332139 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.332175 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-logs\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.332505 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-logs\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.336139 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-public-tls-certs\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.336410 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-config-data\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.337820 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.348123 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.362980 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkx6n\" (UniqueName: \"kubernetes.io/projected/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-kube-api-access-lkx6n\") pod \"nova-api-0\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.402322 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.463893 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.567312 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.601464 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:00:34 crc kubenswrapper[4689]: I1201 09:00:34.982400 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.011231 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c36d8ec3-d59d-4189-8f17-8a4ec186e41e","Type":"ContainerStarted","Data":"a0e8c85a3cfc720a07ba03d4475a2f6161b5c7b42f93e9c585ea8d67b7798e26"} Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.012732 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerStarted","Data":"583c4d8df03c0620722be3d85efe0072b50994b33ea0afb1321c62b195c8e771"} Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.033472 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.175290 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bcd3c9e-fe73-431b-999e-70f81990ffc8" path="/var/lib/kubelet/pods/3bcd3c9e-fe73-431b-999e-70f81990ffc8/volumes" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.270750 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jlvvx"] Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.272598 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.288293 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.288602 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.309164 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jlvvx"] Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.382721 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-config-data\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.382768 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.382844 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5w64\" (UniqueName: \"kubernetes.io/projected/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-kube-api-access-s5w64\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.382883 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-scripts\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.484729 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-config-data\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.484774 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.484820 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5w64\" (UniqueName: \"kubernetes.io/projected/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-kube-api-access-s5w64\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.484853 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-scripts\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.490098 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.491280 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-config-data\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.498180 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-scripts\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.508262 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5w64\" (UniqueName: \"kubernetes.io/projected/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-kube-api-access-s5w64\") pod \"nova-cell1-cell-mapping-jlvvx\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") " pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:35 crc kubenswrapper[4689]: I1201 09:00:35.594695 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:36 crc kubenswrapper[4689]: I1201 09:00:36.065067 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerStarted","Data":"d769d761b4d40da117b4ff372d555d57d6b2b2f243310bfbfbf3e7c5f695228d"} Dec 01 09:00:36 crc kubenswrapper[4689]: I1201 09:00:36.076263 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c36d8ec3-d59d-4189-8f17-8a4ec186e41e","Type":"ContainerStarted","Data":"14244faeef3d9edb560391ac4c0ef92ac30ba3d9f4f109ffb25a2f9dfdcf34fe"} Dec 01 09:00:36 crc kubenswrapper[4689]: I1201 09:00:36.076294 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c36d8ec3-d59d-4189-8f17-8a4ec186e41e","Type":"ContainerStarted","Data":"46519826b63247d3428c9c28bd4010ffd6ef77597c2f0172d76e1ab862f78c0c"} Dec 01 09:00:36 crc kubenswrapper[4689]: I1201 09:00:36.170190 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.170166926 podStartE2EDuration="2.170166926s" podCreationTimestamp="2025-12-01 09:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:00:36.12584385 +0000 UTC m=+1316.198131744" watchObservedRunningTime="2025-12-01 09:00:36.170166926 +0000 UTC m=+1316.242454830" Dec 01 09:00:36 crc kubenswrapper[4689]: I1201 09:00:36.406172 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jlvvx"] Dec 01 09:00:37 crc kubenswrapper[4689]: I1201 09:00:37.083631 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerStarted","Data":"0c9b87e278508c3cdb1e859c1f1a98b5deb677eed17b3b78f60a66821918d297"} Dec 01 09:00:37 crc kubenswrapper[4689]: I1201 09:00:37.088633 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jlvvx" event={"ID":"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd","Type":"ContainerStarted","Data":"80d58717b2799765da54ea3219c700d5f309a5d2c17fc50f5cff842d9e2f1f9c"} Dec 01 09:00:37 crc kubenswrapper[4689]: I1201 09:00:37.088682 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jlvvx" event={"ID":"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd","Type":"ContainerStarted","Data":"776bc063e554f1445c85908d37ee99597cdc4c884b0022d9d0ce792aa59f67e2"} Dec 01 09:00:37 crc kubenswrapper[4689]: I1201 09:00:37.112948 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jlvvx" podStartSLOduration=2.112931879 podStartE2EDuration="2.112931879s" podCreationTimestamp="2025-12-01 09:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:00:37.111255173 +0000 UTC m=+1317.183543077" watchObservedRunningTime="2025-12-01 09:00:37.112931879 +0000 UTC m=+1317.185219783" Dec 01 09:00:37 crc kubenswrapper[4689]: I1201 09:00:37.499513 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" Dec 01 09:00:37 crc kubenswrapper[4689]: I1201 09:00:37.579585 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-59d8q"] Dec 01 09:00:37 crc 
kubenswrapper[4689]: I1201 09:00:37.580078 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" podUID="168c36e5-41be-45fa-8a86-334ccc148504" containerName="dnsmasq-dns" containerID="cri-o://57490bb6e9268b7bada778cd46cdb78d2be222b035283746ea691170a54c8ddb" gracePeriod=10 Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.116310 4689 generic.go:334] "Generic (PLEG): container finished" podID="168c36e5-41be-45fa-8a86-334ccc148504" containerID="57490bb6e9268b7bada778cd46cdb78d2be222b035283746ea691170a54c8ddb" exitCode=0 Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.117593 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" event={"ID":"168c36e5-41be-45fa-8a86-334ccc148504","Type":"ContainerDied","Data":"57490bb6e9268b7bada778cd46cdb78d2be222b035283746ea691170a54c8ddb"} Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.117619 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" event={"ID":"168c36e5-41be-45fa-8a86-334ccc148504","Type":"ContainerDied","Data":"6891b878febc452425d70233668c965ae7ab6a7e219c6a47f374ea353338d135"} Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.117630 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6891b878febc452425d70233668c965ae7ab6a7e219c6a47f374ea353338d135" Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.184973 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q" Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.256719 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-config\") pod \"168c36e5-41be-45fa-8a86-334ccc148504\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.256820 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-sb\") pod \"168c36e5-41be-45fa-8a86-334ccc148504\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.256915 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kltzq\" (UniqueName: \"kubernetes.io/projected/168c36e5-41be-45fa-8a86-334ccc148504-kube-api-access-kltzq\") pod \"168c36e5-41be-45fa-8a86-334ccc148504\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.256941 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-nb\") pod \"168c36e5-41be-45fa-8a86-334ccc148504\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.256989 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-svc\") pod \"168c36e5-41be-45fa-8a86-334ccc148504\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.257014 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-swift-storage-0\") pod \"168c36e5-41be-45fa-8a86-334ccc148504\" (UID: \"168c36e5-41be-45fa-8a86-334ccc148504\") " Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.281630 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168c36e5-41be-45fa-8a86-334ccc148504-kube-api-access-kltzq" (OuterVolumeSpecName: "kube-api-access-kltzq") pod "168c36e5-41be-45fa-8a86-334ccc148504" (UID: "168c36e5-41be-45fa-8a86-334ccc148504"). InnerVolumeSpecName "kube-api-access-kltzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.359628 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kltzq\" (UniqueName: \"kubernetes.io/projected/168c36e5-41be-45fa-8a86-334ccc148504-kube-api-access-kltzq\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.365380 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "168c36e5-41be-45fa-8a86-334ccc148504" (UID: "168c36e5-41be-45fa-8a86-334ccc148504"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.378099 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "168c36e5-41be-45fa-8a86-334ccc148504" (UID: "168c36e5-41be-45fa-8a86-334ccc148504"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.385170 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-config" (OuterVolumeSpecName: "config") pod "168c36e5-41be-45fa-8a86-334ccc148504" (UID: "168c36e5-41be-45fa-8a86-334ccc148504"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.401000 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "168c36e5-41be-45fa-8a86-334ccc148504" (UID: "168c36e5-41be-45fa-8a86-334ccc148504"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.401723 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "168c36e5-41be-45fa-8a86-334ccc148504" (UID: "168c36e5-41be-45fa-8a86-334ccc148504"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.460890 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.460931 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-config\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.460948 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.460959 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:38 crc kubenswrapper[4689]: I1201 09:00:38.460969 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168c36e5-41be-45fa-8a86-334ccc148504-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 01 09:00:39 crc kubenswrapper[4689]: I1201 09:00:39.131939 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-59d8q"
Dec 01 09:00:39 crc kubenswrapper[4689]: I1201 09:00:39.132130 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerStarted","Data":"1ef4915f4e401e0048b873858950aee437da30846a84498cdc9ff067f4a35aad"}
Dec 01 09:00:39 crc kubenswrapper[4689]: I1201 09:00:39.161874 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:00:39 crc kubenswrapper[4689]: I1201 09:00:39.161965 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:00:39 crc kubenswrapper[4689]: I1201 09:00:39.162016 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx"
Dec 01 09:00:39 crc kubenswrapper[4689]: I1201 09:00:39.163055 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a73b6758eaf1af9bc3a327d8874afb8d2ff28265d999a583ab055845b6607b6a"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 09:00:39 crc kubenswrapper[4689]: I1201 09:00:39.163125 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://a73b6758eaf1af9bc3a327d8874afb8d2ff28265d999a583ab055845b6607b6a" gracePeriod=600
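
The probe sequence just above is a complete liveness-failure cycle for machine-config-daemon-hmdnx: the HTTP prober gets "connection refused" on http://127.0.0.1:8798/health, the SyncLoop marks the probe unhealthy, and kubelet kills container a73b6758... with its 600-second grace period so it can be restarted (the ContainerDied/ContainerStarted pair follows below). Kubelet's real prober is Go code inside the kubelet; the sketch below (the name http_probe is an assumption) only illustrates HTTP-probe semantics, where a 2xx/3xx answer within the timeout is healthy and a connection error or timeout is a failure:

    import urllib.request, urllib.error

    def http_probe(url: str, timeout: float = 1.0) -> tuple[bool, str]:
        """HTTP-probe semantics: healthy iff the endpoint answers 2xx/3xx
        within the timeout; refused connections and timeouts are failures."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400, f"status={resp.status}"
        except (urllib.error.URLError, OSError) as exc:
            return False, str(exc)

    # Against the dead daemon above this would return something like
    # (False, "<urlopen error [Errno 111] Connection refused>").
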
Dec 01 09:00:39 crc kubenswrapper[4689]: I1201 09:00:39.201799 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-59d8q"]
Dec 01 09:00:39 crc kubenswrapper[4689]: I1201 09:00:39.217330 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-59d8q"]
Dec 01 09:00:40 crc kubenswrapper[4689]: I1201 09:00:40.145814 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="a73b6758eaf1af9bc3a327d8874afb8d2ff28265d999a583ab055845b6607b6a" exitCode=0
Dec 01 09:00:40 crc kubenswrapper[4689]: I1201 09:00:40.146013 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"a73b6758eaf1af9bc3a327d8874afb8d2ff28265d999a583ab055845b6607b6a"}
Dec 01 09:00:40 crc kubenswrapper[4689]: I1201 09:00:40.146055 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49"}
Dec 01 09:00:40 crc kubenswrapper[4689]: I1201 09:00:40.146074 4689 scope.go:117] "RemoveContainer" containerID="d1e70c73c88326989d073faf6067f98f45b064162bc9402e3b9575ef624c63ae"
Dec 01 09:00:41 crc kubenswrapper[4689]: I1201 09:00:41.064152 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168c36e5-41be-45fa-8a86-334ccc148504" path="/var/lib/kubelet/pods/168c36e5-41be-45fa-8a86-334ccc148504/volumes"
Dec 01 09:00:41 crc kubenswrapper[4689]: I1201 09:00:41.161441 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerStarted","Data":"8563f4e92b70dc58be8b849ffe0655a1d9882ee5599553c78b789ec0b174e24a"}
Dec 01 09:00:41 crc kubenswrapper[4689]: I1201 09:00:41.162578 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 01 09:00:41 crc kubenswrapper[4689]: I1201 09:00:41.186033 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.635958919 podStartE2EDuration="8.186013433s" podCreationTimestamp="2025-12-01 09:00:33 +0000 UTC" firstStartedPulling="2025-12-01 09:00:34.402247409 +0000 UTC m=+1314.474535313" lastFinishedPulling="2025-12-01 09:00:39.952301923 +0000 UTC m=+1320.024589827" observedRunningTime="2025-12-01 09:00:41.178451978 +0000 UTC m=+1321.250739882" watchObservedRunningTime="2025-12-01 09:00:41.186013433 +0000 UTC m=+1321.258301337"
Dec 01 09:00:44 crc kubenswrapper[4689]: I1201 09:00:44.192618 4689 generic.go:334] "Generic (PLEG): container finished" podID="3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd" containerID="80d58717b2799765da54ea3219c700d5f309a5d2c17fc50f5cff842d9e2f1f9c" exitCode=0
Dec 01 09:00:44 crc kubenswrapper[4689]: I1201 09:00:44.192832 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jlvvx" event={"ID":"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd","Type":"ContainerDied","Data":"80d58717b2799765da54ea3219c700d5f309a5d2c17fc50f5cff842d9e2f1f9c"}
Dec 01 09:00:44 crc kubenswrapper[4689]: I1201 09:00:44.464872 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
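
The pod_startup_latency_tracker entry for ceilometer-0 above shows how its two durations relate: podStartSLOduration excludes image-pull time, so it equals podStartE2EDuration minus the pull window (lastFinishedPulling - firstStartedPulling). A quick check of that arithmetic with the entry's own timestamps (truncated to microseconds, since Python's datetime has no nanosecond support):

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    first_pull = datetime.strptime("2025-12-01 09:00:34.402247", fmt)
    last_pull = datetime.strptime("2025-12-01 09:00:39.952301", fmt)
    e2e = 8.186013433  # podStartE2EDuration in seconds

    # SLO duration = end-to-end startup time minus time spent pulling images.
    slo = e2e - (last_pull - first_pull).total_seconds()
    print(round(slo, 6))  # 2.635959, matching podStartSLOduration=2.635958919

For openstack/nova-api-0 earlier (09:00:36.170190) both pull timestamps are the zero time, meaning no image pull was needed, so podStartSLOduration and podStartE2EDuration coincide at 2.170166926s.
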
Dec 01 09:00:44 crc kubenswrapper[4689]: I1201 09:00:44.464927 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.480545 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.480597 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.659215 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jlvvx"
Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.724146 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5w64\" (UniqueName: \"kubernetes.io/projected/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-kube-api-access-s5w64\") pod \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") "
Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.724200 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-combined-ca-bundle\") pod \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") "
Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.724306 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-scripts\") pod \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") "
Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.724329 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-config-data\") pod \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\" (UID: \"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd\") "
Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.733074 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-kube-api-access-s5w64" (OuterVolumeSpecName: "kube-api-access-s5w64") pod "3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd" (UID: "3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd"). InnerVolumeSpecName "kube-api-access-s5w64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.735649 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-scripts" (OuterVolumeSpecName: "scripts") pod "3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd" (UID: "3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.770877 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-config-data" (OuterVolumeSpecName: "config-data") pod "3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd" (UID: "3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.776054 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd" (UID: "3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.848578 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5w64\" (UniqueName: \"kubernetes.io/projected/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-kube-api-access-s5w64\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.848612 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.848623 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:45 crc kubenswrapper[4689]: I1201 09:00:45.848631 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.212001 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jlvvx" event={"ID":"3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd","Type":"ContainerDied","Data":"776bc063e554f1445c85908d37ee99597cdc4c884b0022d9d0ce792aa59f67e2"} Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.212050 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="776bc063e554f1445c85908d37ee99597cdc4c884b0022d9d0ce792aa59f67e2" Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.212459 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jlvvx" Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.417439 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.418096 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-log" containerID="cri-o://46519826b63247d3428c9c28bd4010ffd6ef77597c2f0172d76e1ab862f78c0c" gracePeriod=30 Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.418207 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-api" containerID="cri-o://14244faeef3d9edb560391ac4c0ef92ac30ba3d9f4f109ffb25a2f9dfdcf34fe" gracePeriod=30 Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.490223 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.490509 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-log" containerID="cri-o://7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4" gracePeriod=30 Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.491009 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-metadata" containerID="cri-o://2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80" gracePeriod=30 Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.507190 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:46 crc kubenswrapper[4689]: I1201 09:00:46.507415 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" containerName="nova-scheduler-scheduler" containerID="cri-o://5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba" gracePeriod=30 Dec 01 09:00:47 crc kubenswrapper[4689]: I1201 09:00:47.235961 4689 generic.go:334] "Generic (PLEG): container finished" podID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerID="46519826b63247d3428c9c28bd4010ffd6ef77597c2f0172d76e1ab862f78c0c" exitCode=143 Dec 01 09:00:47 crc kubenswrapper[4689]: I1201 09:00:47.236046 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c36d8ec3-d59d-4189-8f17-8a4ec186e41e","Type":"ContainerDied","Data":"46519826b63247d3428c9c28bd4010ffd6ef77597c2f0172d76e1ab862f78c0c"} Dec 01 09:00:47 crc kubenswrapper[4689]: I1201 09:00:47.238333 4689 generic.go:334] "Generic (PLEG): container finished" podID="351bf336-7502-4bd1-be87-b032449e4b00" containerID="7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4" exitCode=143 Dec 01 09:00:47 crc kubenswrapper[4689]: I1201 09:00:47.239898 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"351bf336-7502-4bd1-be87-b032449e4b00","Type":"ContainerDied","Data":"7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4"} Dec 01 09:00:49 crc kubenswrapper[4689]: E1201 09:00:49.115311 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba is running failed: container process not found" containerID="5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:00:49 crc kubenswrapper[4689]: E1201 09:00:49.116151 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba is running failed: container process not found" containerID="5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:00:49 crc kubenswrapper[4689]: E1201 09:00:49.116491 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba is running failed: container process not found" containerID="5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 01 09:00:49 crc kubenswrapper[4689]: E1201 09:00:49.116575 4689 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" containerName="nova-scheduler-scheduler" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.258907 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.267932 4689 generic.go:334] "Generic (PLEG): container finished" podID="c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" containerID="5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba" exitCode=0 Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.268169 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.268088 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d","Type":"ContainerDied","Data":"5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba"} Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.268396 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d","Type":"ContainerDied","Data":"f6ee744bfc413f3635aa932b4814d4aed70cabf2eedb2ab326e8a68a8b4c425c"} Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.268438 4689 scope.go:117] "RemoveContainer" containerID="5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.300031 4689 scope.go:117] "RemoveContainer" containerID="5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba" Dec 01 09:00:49 crc kubenswrapper[4689]: E1201 09:00:49.300442 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba\": container with ID starting with 5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba not found: ID does not exist" containerID="5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.300472 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba"} err="failed to get container status \"5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba\": rpc error: code = NotFound desc = could not find container \"5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba\": container with ID starting with 5b628d1f5841af73397176ddc5006ab551751ad2346d113add58773343c530ba not found: ID does not exist" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.338587 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-config-data\") pod \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.338706 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-combined-ca-bundle\") pod \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.338857 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mxsh\" (UniqueName: \"kubernetes.io/projected/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-kube-api-access-7mxsh\") pod \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\" (UID: \"c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d\") " Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.347237 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-kube-api-access-7mxsh" (OuterVolumeSpecName: "kube-api-access-7mxsh") pod "c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" (UID: "c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d"). InnerVolumeSpecName "kube-api-access-7mxsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.369054 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-config-data" (OuterVolumeSpecName: "config-data") pod "c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" (UID: "c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.401023 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" (UID: "c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.440692 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mxsh\" (UniqueName: \"kubernetes.io/projected/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-kube-api-access-7mxsh\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.440739 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.440757 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.610475 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.628662 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.643953 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:49 crc kubenswrapper[4689]: E1201 09:00:49.656461 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168c36e5-41be-45fa-8a86-334ccc148504" containerName="dnsmasq-dns" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.656521 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="168c36e5-41be-45fa-8a86-334ccc148504" containerName="dnsmasq-dns" Dec 01 09:00:49 crc kubenswrapper[4689]: E1201 09:00:49.656566 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" containerName="nova-scheduler-scheduler" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.656577 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" containerName="nova-scheduler-scheduler" Dec 01 09:00:49 crc kubenswrapper[4689]: E1201 09:00:49.656608 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168c36e5-41be-45fa-8a86-334ccc148504" containerName="init" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.656617 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="168c36e5-41be-45fa-8a86-334ccc148504" containerName="init" Dec 01 09:00:49 crc kubenswrapper[4689]: E1201 09:00:49.656634 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd" 
containerName="nova-manage" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.656642 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd" containerName="nova-manage" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.657089 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="168c36e5-41be-45fa-8a86-334ccc148504" containerName="dnsmasq-dns" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.657112 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" containerName="nova-scheduler-scheduler" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.657128 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd" containerName="nova-manage" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.657897 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.663888 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.664861 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.712541 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:50076->10.217.0.196:8775: read: connection reset by peer" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.712588 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:50074->10.217.0.196:8775: read: connection reset by peer" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.748224 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.748430 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhbr5\" (UniqueName: \"kubernetes.io/projected/a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc-kube-api-access-zhbr5\") pod \"nova-scheduler-0\" (UID: \"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.748535 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc-config-data\") pod \"nova-scheduler-0\" (UID: \"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.853953 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhbr5\" (UniqueName: \"kubernetes.io/projected/a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc-kube-api-access-zhbr5\") pod \"nova-scheduler-0\" (UID: 
\"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.854064 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc-config-data\") pod \"nova-scheduler-0\" (UID: \"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.854140 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.863323 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.864025 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc-config-data\") pod \"nova-scheduler-0\" (UID: \"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.872704 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhbr5\" (UniqueName: \"kubernetes.io/projected/a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc-kube-api-access-zhbr5\") pod \"nova-scheduler-0\" (UID: \"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc\") " pod="openstack/nova-scheduler-0" Dec 01 09:00:49 crc kubenswrapper[4689]: I1201 09:00:49.986360 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.250494 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.308315 4689 generic.go:334] "Generic (PLEG): container finished" podID="351bf336-7502-4bd1-be87-b032449e4b00" containerID="2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80" exitCode=0 Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.308383 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"351bf336-7502-4bd1-be87-b032449e4b00","Type":"ContainerDied","Data":"2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80"} Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.308415 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"351bf336-7502-4bd1-be87-b032449e4b00","Type":"ContainerDied","Data":"f2ea5b1b1b1e1d4587b132137b5593b4b4066c7f50c08c5034fa99a4f713967d"} Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.308436 4689 scope.go:117] "RemoveContainer" containerID="2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.308448 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.362035 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-config-data\") pod \"351bf336-7502-4bd1-be87-b032449e4b00\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.362353 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-nova-metadata-tls-certs\") pod \"351bf336-7502-4bd1-be87-b032449e4b00\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.362560 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/351bf336-7502-4bd1-be87-b032449e4b00-logs\") pod \"351bf336-7502-4bd1-be87-b032449e4b00\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.362786 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzb96\" (UniqueName: \"kubernetes.io/projected/351bf336-7502-4bd1-be87-b032449e4b00-kube-api-access-pzb96\") pod \"351bf336-7502-4bd1-be87-b032449e4b00\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.362902 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-combined-ca-bundle\") pod \"351bf336-7502-4bd1-be87-b032449e4b00\" (UID: \"351bf336-7502-4bd1-be87-b032449e4b00\") " Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.362961 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/351bf336-7502-4bd1-be87-b032449e4b00-logs" (OuterVolumeSpecName: "logs") pod "351bf336-7502-4bd1-be87-b032449e4b00" (UID: "351bf336-7502-4bd1-be87-b032449e4b00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.363397 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/351bf336-7502-4bd1-be87-b032449e4b00-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.371571 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351bf336-7502-4bd1-be87-b032449e4b00-kube-api-access-pzb96" (OuterVolumeSpecName: "kube-api-access-pzb96") pod "351bf336-7502-4bd1-be87-b032449e4b00" (UID: "351bf336-7502-4bd1-be87-b032449e4b00"). InnerVolumeSpecName "kube-api-access-pzb96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.373262 4689 scope.go:117] "RemoveContainer" containerID="7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.406224 4689 scope.go:117] "RemoveContainer" containerID="2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80" Dec 01 09:00:50 crc kubenswrapper[4689]: E1201 09:00:50.409514 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80\": container with ID starting with 2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80 not found: ID does not exist" containerID="2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.409556 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80"} err="failed to get container status \"2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80\": rpc error: code = NotFound desc = could not find container \"2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80\": container with ID starting with 2c17204e5869ae25d25347a66320b0cb109bc26657a4c319a0f448ea518b1c80 not found: ID does not exist" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.409585 4689 scope.go:117] "RemoveContainer" containerID="7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4" Dec 01 09:00:50 crc kubenswrapper[4689]: E1201 09:00:50.410076 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4\": container with ID starting with 7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4 not found: ID does not exist" containerID="7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.410133 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4"} err="failed to get container status \"7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4\": rpc error: code = NotFound desc = could not find container \"7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4\": container with ID starting with 7ad020d8558bbbfa4b7c2d9bec00a922d8dede433e450fb0f2efd28697021dd4 not found: ID does not exist" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.419169 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "351bf336-7502-4bd1-be87-b032449e4b00" (UID: "351bf336-7502-4bd1-be87-b032449e4b00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.424138 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-config-data" (OuterVolumeSpecName: "config-data") pod "351bf336-7502-4bd1-be87-b032449e4b00" (UID: "351bf336-7502-4bd1-be87-b032449e4b00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.441641 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "351bf336-7502-4bd1-be87-b032449e4b00" (UID: "351bf336-7502-4bd1-be87-b032449e4b00"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.470218 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzb96\" (UniqueName: \"kubernetes.io/projected/351bf336-7502-4bd1-be87-b032449e4b00-kube-api-access-pzb96\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.470269 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.470302 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.470314 4689 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/351bf336-7502-4bd1-be87-b032449e4b00-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:00:50 crc kubenswrapper[4689]: W1201 09:00:50.671632 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda13f2879_b8c7_42d5_8f88_ca9aeb7f26bc.slice/crio-a7a46cdec045aa0e173e0488f9cca680175dfd23db80df733077be4e7398b8ff WatchSource:0}: Error finding container a7a46cdec045aa0e173e0488f9cca680175dfd23db80df733077be4e7398b8ff: Status 404 returned error can't find the container with id a7a46cdec045aa0e173e0488f9cca680175dfd23db80df733077be4e7398b8ff Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.685673 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.696011 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.713234 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.729103 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:50 crc kubenswrapper[4689]: E1201 09:00:50.731997 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-metadata" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.732040 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-metadata" Dec 01 09:00:50 crc kubenswrapper[4689]: E1201 09:00:50.732051 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-log" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.732060 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="351bf336-7502-4bd1-be87-b032449e4b00" 
containerName="nova-metadata-log" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.732447 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-metadata" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.732472 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="351bf336-7502-4bd1-be87-b032449e4b00" containerName="nova-metadata-log" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.739986 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.744134 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.744161 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.749005 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.877058 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e9419c-e23b-4c71-b88e-736138bcdd65-config-data\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.877131 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e9419c-e23b-4c71-b88e-736138bcdd65-logs\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.877161 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e9419c-e23b-4c71-b88e-736138bcdd65-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.877224 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hngjf\" (UniqueName: \"kubernetes.io/projected/b0e9419c-e23b-4c71-b88e-736138bcdd65-kube-api-access-hngjf\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.877258 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e9419c-e23b-4c71-b88e-736138bcdd65-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.979846 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e9419c-e23b-4c71-b88e-736138bcdd65-config-data\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.979924 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b0e9419c-e23b-4c71-b88e-736138bcdd65-logs\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.979960 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e9419c-e23b-4c71-b88e-736138bcdd65-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.980035 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hngjf\" (UniqueName: \"kubernetes.io/projected/b0e9419c-e23b-4c71-b88e-736138bcdd65-kube-api-access-hngjf\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.980067 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e9419c-e23b-4c71-b88e-736138bcdd65-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.983066 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e9419c-e23b-4c71-b88e-736138bcdd65-logs\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.991399 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e9419c-e23b-4c71-b88e-736138bcdd65-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.991984 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e9419c-e23b-4c71-b88e-736138bcdd65-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:50 crc kubenswrapper[4689]: I1201 09:00:50.992547 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e9419c-e23b-4c71-b88e-736138bcdd65-config-data\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:51 crc kubenswrapper[4689]: I1201 09:00:51.026193 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hngjf\" (UniqueName: \"kubernetes.io/projected/b0e9419c-e23b-4c71-b88e-736138bcdd65-kube-api-access-hngjf\") pod \"nova-metadata-0\" (UID: \"b0e9419c-e23b-4c71-b88e-736138bcdd65\") " pod="openstack/nova-metadata-0" Dec 01 09:00:51 crc kubenswrapper[4689]: I1201 09:00:51.069547 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 01 09:00:51 crc kubenswrapper[4689]: I1201 09:00:51.079439 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="351bf336-7502-4bd1-be87-b032449e4b00" path="/var/lib/kubelet/pods/351bf336-7502-4bd1-be87-b032449e4b00/volumes" Dec 01 09:00:51 crc kubenswrapper[4689]: I1201 09:00:51.080349 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d" path="/var/lib/kubelet/pods/c2c4d31d-4c6b-447e-99ef-8d46cbdaf55d/volumes" Dec 01 09:00:51 crc kubenswrapper[4689]: I1201 09:00:51.324900 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc","Type":"ContainerStarted","Data":"a7a46cdec045aa0e173e0488f9cca680175dfd23db80df733077be4e7398b8ff"} Dec 01 09:00:51 crc kubenswrapper[4689]: I1201 09:00:51.627750 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 01 09:00:52 crc kubenswrapper[4689]: I1201 09:00:52.336072 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0e9419c-e23b-4c71-b88e-736138bcdd65","Type":"ContainerStarted","Data":"51f98aff94db8180f00ee6c87fce8a191f20ee2e18f39c184a87a8707de23f2f"} Dec 01 09:00:52 crc kubenswrapper[4689]: I1201 09:00:52.339032 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc","Type":"ContainerStarted","Data":"8b7e91afcc6bc03bc4505611cf54e0254511130044ecb6660cdedec60b533bd9"} Dec 01 09:00:55 crc kubenswrapper[4689]: I1201 09:00:55.555553 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" podUID="5266d333-3337-4481-9478-2e1df848bfa2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:00:55 crc kubenswrapper[4689]: I1201 09:00:55.622623 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:00:55 crc kubenswrapper[4689]: I1201 09:00:55.624919 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:00:57 crc kubenswrapper[4689]: I1201 09:00:57.622887 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="bc1ecd4c-eede-492c-ac97-071c42545607" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:00:57 crc kubenswrapper[4689]: I1201 09:00:57.623937 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="bc1ecd4c-eede-492c-ac97-071c42545607" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:00:57 crc kubenswrapper[4689]: I1201 09:00:57.777626 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:00:57 crc kubenswrapper[4689]: I1201 
09:00:57.777714 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:00 crc kubenswrapper[4689]: I1201 09:01:00.141403 4689 generic.go:334] "Generic (PLEG): container finished" podID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerID="14244faeef3d9edb560391ac4c0ef92ac30ba3d9f4f109ffb25a2f9dfdcf34fe" exitCode=-1 Dec 01 09:01:00 crc kubenswrapper[4689]: I1201 09:01:00.141552 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c36d8ec3-d59d-4189-8f17-8a4ec186e41e","Type":"ContainerDied","Data":"14244faeef3d9edb560391ac4c0ef92ac30ba3d9f4f109ffb25a2f9dfdcf34fe"} Dec 01 09:01:01 crc kubenswrapper[4689]: I1201 09:01:01.937327 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:01 crc kubenswrapper[4689]: I1201 09:01:01.937725 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:01 crc kubenswrapper[4689]: I1201 09:01:01.937338 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:01 crc kubenswrapper[4689]: I1201 09:01:01.937853 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:02 crc kubenswrapper[4689]: I1201 09:01:02.973073 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29409661-b4dz4"] Dec 01 09:01:02 crc kubenswrapper[4689]: I1201 09:01:02.976266 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:02 crc kubenswrapper[4689]: I1201 09:01:02.984515 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29409661-b4dz4"] Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.062050 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-combined-ca-bundle\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.062137 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-config-data\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.062162 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-fernet-keys\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.062223 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z8pk\" (UniqueName: \"kubernetes.io/projected/06af101b-855c-409b-8f88-171d7e9aaffc-kube-api-access-5z8pk\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.183964 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-combined-ca-bundle\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.184066 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-config-data\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.184112 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-fernet-keys\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.184384 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z8pk\" (UniqueName: \"kubernetes.io/projected/06af101b-855c-409b-8f88-171d7e9aaffc-kube-api-access-5z8pk\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.190867 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-fernet-keys\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.195890 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-combined-ca-bundle\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.217309 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-config-data\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.228386 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z8pk\" (UniqueName: \"kubernetes.io/projected/06af101b-855c-409b-8f88-171d7e9aaffc-kube-api-access-5z8pk\") pod \"keystone-cron-29409661-b4dz4\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:03 crc kubenswrapper[4689]: I1201 09:01:03.308006 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.004078 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.217794 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0e9419c-e23b-4c71-b88e-736138bcdd65","Type":"ContainerStarted","Data":"1b797774757f694f816224944208d9dc5bfe9a50bf5db7eeaef14bbeeb0c5c6b"} Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.349339 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.410096 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29409661-b4dz4"] Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.478043 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=15.478026043 podStartE2EDuration="15.478026043s" podCreationTimestamp="2025-12-01 09:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:01:04.477738906 +0000 UTC m=+1344.550026810" watchObservedRunningTime="2025-12-01 09:01:04.478026043 +0000 UTC m=+1344.550313947" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.527614 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-combined-ca-bundle\") pod \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.527663 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-public-tls-certs\") pod \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.527809 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-config-data\") pod \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.527856 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-logs\") pod \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.527946 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkx6n\" (UniqueName: \"kubernetes.io/projected/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-kube-api-access-lkx6n\") pod \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.527968 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-internal-tls-certs\") pod \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\" (UID: \"c36d8ec3-d59d-4189-8f17-8a4ec186e41e\") " Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.531202 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-logs" (OuterVolumeSpecName: "logs") pod "c36d8ec3-d59d-4189-8f17-8a4ec186e41e" (UID: "c36d8ec3-d59d-4189-8f17-8a4ec186e41e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.570730 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-kube-api-access-lkx6n" (OuterVolumeSpecName: "kube-api-access-lkx6n") pod "c36d8ec3-d59d-4189-8f17-8a4ec186e41e" (UID: "c36d8ec3-d59d-4189-8f17-8a4ec186e41e"). InnerVolumeSpecName "kube-api-access-lkx6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.614541 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-config-data" (OuterVolumeSpecName: "config-data") pod "c36d8ec3-d59d-4189-8f17-8a4ec186e41e" (UID: "c36d8ec3-d59d-4189-8f17-8a4ec186e41e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.615853 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c36d8ec3-d59d-4189-8f17-8a4ec186e41e" (UID: "c36d8ec3-d59d-4189-8f17-8a4ec186e41e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.630670 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.630700 4689 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-logs\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.630712 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkx6n\" (UniqueName: \"kubernetes.io/projected/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-kube-api-access-lkx6n\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.630722 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.687643 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c36d8ec3-d59d-4189-8f17-8a4ec186e41e" (UID: "c36d8ec3-d59d-4189-8f17-8a4ec186e41e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.718088 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c36d8ec3-d59d-4189-8f17-8a4ec186e41e" (UID: "c36d8ec3-d59d-4189-8f17-8a4ec186e41e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.732309 4689 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.732348 4689 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c36d8ec3-d59d-4189-8f17-8a4ec186e41e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:04 crc kubenswrapper[4689]: I1201 09:01:04.987973 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.228529 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c36d8ec3-d59d-4189-8f17-8a4ec186e41e","Type":"ContainerDied","Data":"a0e8c85a3cfc720a07ba03d4475a2f6161b5c7b42f93e9c585ea8d67b7798e26"} Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.228581 4689 scope.go:117] "RemoveContainer" containerID="14244faeef3d9edb560391ac4c0ef92ac30ba3d9f4f109ffb25a2f9dfdcf34fe" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.228702 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.234877 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0e9419c-e23b-4c71-b88e-736138bcdd65","Type":"ContainerStarted","Data":"8c05dd7b740399b0328f87adf6cb02e022e1b8c89810cbebc53f1a7560107c72"} Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.237602 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409661-b4dz4" event={"ID":"06af101b-855c-409b-8f88-171d7e9aaffc","Type":"ContainerStarted","Data":"bd977e75c04c003c23b27132f464f4f87880a9092ee0c79f13c4bbb3ef0c49cf"} Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.237645 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409661-b4dz4" event={"ID":"06af101b-855c-409b-8f88-171d7e9aaffc","Type":"ContainerStarted","Data":"aae7336c0e3337d173bdc5c355ea3e1d1070bb61ca424ef282ff9f33b6676a9c"} Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.270788 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.282553 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.297333 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=15.297311971 podStartE2EDuration="15.297311971s" podCreationTimestamp="2025-12-01 09:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:01:05.293107477 +0000 UTC m=+1345.365395401" watchObservedRunningTime="2025-12-01 09:01:05.297311971 +0000 UTC m=+1345.369599875" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.343022 4689 scope.go:117] "RemoveContainer" containerID="46519826b63247d3428c9c28bd4010ffd6ef77597c2f0172d76e1ab862f78c0c" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.365396 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 01 09:01:05 crc kubenswrapper[4689]: 
E1201 09:01:05.365859 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-log" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.365879 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-log" Dec 01 09:01:05 crc kubenswrapper[4689]: E1201 09:01:05.365909 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-api" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.365915 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-api" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.366115 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-api" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.366126 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" containerName="nova-api-log" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.367089 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.371890 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.372094 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.372240 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.384453 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.385226 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29409661-b4dz4" podStartSLOduration=3.385202714 podStartE2EDuration="3.385202714s" podCreationTimestamp="2025-12-01 09:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:01:05.343028055 +0000 UTC m=+1345.415315979" watchObservedRunningTime="2025-12-01 09:01:05.385202714 +0000 UTC m=+1345.457490618" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.448161 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-config-data\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.448216 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-public-tls-certs\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.448299 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtsz8\" (UniqueName: \"kubernetes.io/projected/a3a578c7-bcdf-46f5-a781-5759e3c6da45-kube-api-access-xtsz8\") pod 
\"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.448412 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.448435 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3a578c7-bcdf-46f5-a781-5759e3c6da45-logs\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.448460 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.551248 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-config-data\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.551584 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-public-tls-certs\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.551735 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtsz8\" (UniqueName: \"kubernetes.io/projected/a3a578c7-bcdf-46f5-a781-5759e3c6da45-kube-api-access-xtsz8\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.552013 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.552135 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3a578c7-bcdf-46f5-a781-5759e3c6da45-logs\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.552256 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.552943 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3a578c7-bcdf-46f5-a781-5759e3c6da45-logs\") pod 
\"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.557220 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-config-data\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.558116 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.558836 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.571959 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3a578c7-bcdf-46f5-a781-5759e3c6da45-public-tls-certs\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.573712 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtsz8\" (UniqueName: \"kubernetes.io/projected/a3a578c7-bcdf-46f5-a781-5759e3c6da45-kube-api-access-xtsz8\") pod \"nova-api-0\" (UID: \"a3a578c7-bcdf-46f5-a781-5759e3c6da45\") " pod="openstack/nova-api-0" Dec 01 09:01:05 crc kubenswrapper[4689]: I1201 09:01:05.722294 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 01 09:01:06 crc kubenswrapper[4689]: I1201 09:01:06.070577 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:01:06 crc kubenswrapper[4689]: I1201 09:01:06.071186 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 01 09:01:06 crc kubenswrapper[4689]: W1201 09:01:06.211012 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3a578c7_bcdf_46f5_a781_5759e3c6da45.slice/crio-dcc898d785fd606802d96796420062bdd8e2e157b2c8fd5fdb41a1f53f339f9e WatchSource:0}: Error finding container dcc898d785fd606802d96796420062bdd8e2e157b2c8fd5fdb41a1f53f339f9e: Status 404 returned error can't find the container with id dcc898d785fd606802d96796420062bdd8e2e157b2c8fd5fdb41a1f53f339f9e Dec 01 09:01:06 crc kubenswrapper[4689]: I1201 09:01:06.215603 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 01 09:01:06 crc kubenswrapper[4689]: I1201 09:01:06.280317 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3a578c7-bcdf-46f5-a781-5759e3c6da45","Type":"ContainerStarted","Data":"dcc898d785fd606802d96796420062bdd8e2e157b2c8fd5fdb41a1f53f339f9e"} Dec 01 09:01:07 crc kubenswrapper[4689]: I1201 09:01:07.061650 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c36d8ec3-d59d-4189-8f17-8a4ec186e41e" path="/var/lib/kubelet/pods/c36d8ec3-d59d-4189-8f17-8a4ec186e41e/volumes" Dec 01 09:01:07 crc kubenswrapper[4689]: I1201 09:01:07.293452 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3a578c7-bcdf-46f5-a781-5759e3c6da45","Type":"ContainerStarted","Data":"bb59a2934681384cde14be0af192111e76c0b676289cc8b22d1500102f817b95"} Dec 01 09:01:07 crc kubenswrapper[4689]: I1201 09:01:07.294152 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3a578c7-bcdf-46f5-a781-5759e3c6da45","Type":"ContainerStarted","Data":"f4a47cecc4df29167762f2b2dd42fd820b681ff1a9a727945ff2739359fbbe9f"} Dec 01 09:01:07 crc kubenswrapper[4689]: I1201 09:01:07.317328 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.317294466 podStartE2EDuration="2.317294466s" podCreationTimestamp="2025-12-01 09:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:01:07.310828972 +0000 UTC m=+1347.383116876" watchObservedRunningTime="2025-12-01 09:01:07.317294466 +0000 UTC m=+1347.389582370" Dec 01 09:01:09 crc kubenswrapper[4689]: I1201 09:01:09.312256 4689 generic.go:334] "Generic (PLEG): container finished" podID="06af101b-855c-409b-8f88-171d7e9aaffc" containerID="bd977e75c04c003c23b27132f464f4f87880a9092ee0c79f13c4bbb3ef0c49cf" exitCode=0 Dec 01 09:01:09 crc kubenswrapper[4689]: I1201 09:01:09.312349 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409661-b4dz4" event={"ID":"06af101b-855c-409b-8f88-171d7e9aaffc","Type":"ContainerDied","Data":"bd977e75c04c003c23b27132f464f4f87880a9092ee0c79f13c4bbb3ef0c49cf"} Dec 01 09:01:09 crc kubenswrapper[4689]: I1201 09:01:09.987900 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 
09:01:10.029207 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.353871 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.658062 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.756313 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-fernet-keys\") pod \"06af101b-855c-409b-8f88-171d7e9aaffc\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.756509 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z8pk\" (UniqueName: \"kubernetes.io/projected/06af101b-855c-409b-8f88-171d7e9aaffc-kube-api-access-5z8pk\") pod \"06af101b-855c-409b-8f88-171d7e9aaffc\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.756580 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-config-data\") pod \"06af101b-855c-409b-8f88-171d7e9aaffc\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.756600 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-combined-ca-bundle\") pod \"06af101b-855c-409b-8f88-171d7e9aaffc\" (UID: \"06af101b-855c-409b-8f88-171d7e9aaffc\") " Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.762575 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "06af101b-855c-409b-8f88-171d7e9aaffc" (UID: "06af101b-855c-409b-8f88-171d7e9aaffc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.762885 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06af101b-855c-409b-8f88-171d7e9aaffc-kube-api-access-5z8pk" (OuterVolumeSpecName: "kube-api-access-5z8pk") pod "06af101b-855c-409b-8f88-171d7e9aaffc" (UID: "06af101b-855c-409b-8f88-171d7e9aaffc"). InnerVolumeSpecName "kube-api-access-5z8pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.788452 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06af101b-855c-409b-8f88-171d7e9aaffc" (UID: "06af101b-855c-409b-8f88-171d7e9aaffc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.831311 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-config-data" (OuterVolumeSpecName: "config-data") pod "06af101b-855c-409b-8f88-171d7e9aaffc" (UID: "06af101b-855c-409b-8f88-171d7e9aaffc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.858833 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.858872 4689 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.858884 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z8pk\" (UniqueName: \"kubernetes.io/projected/06af101b-855c-409b-8f88-171d7e9aaffc-kube-api-access-5z8pk\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:10 crc kubenswrapper[4689]: I1201 09:01:10.858896 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06af101b-855c-409b-8f88-171d7e9aaffc-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:01:11 crc kubenswrapper[4689]: I1201 09:01:11.070947 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:01:11 crc kubenswrapper[4689]: I1201 09:01:11.071259 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:01:11 crc kubenswrapper[4689]: I1201 09:01:11.333583 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29409661-b4dz4" Dec 01 09:01:11 crc kubenswrapper[4689]: I1201 09:01:11.333571 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29409661-b4dz4" event={"ID":"06af101b-855c-409b-8f88-171d7e9aaffc","Type":"ContainerDied","Data":"aae7336c0e3337d173bdc5c355ea3e1d1070bb61ca424ef282ff9f33b6676a9c"} Dec 01 09:01:11 crc kubenswrapper[4689]: I1201 09:01:11.333696 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aae7336c0e3337d173bdc5c355ea3e1d1070bb61ca424ef282ff9f33b6676a9c" Dec 01 09:01:11 crc kubenswrapper[4689]: I1201 09:01:11.430527 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:11 crc kubenswrapper[4689]: I1201 09:01:11.430541 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:11 crc kubenswrapper[4689]: I1201 09:01:11.430658 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:11 crc kubenswrapper[4689]: I1201 09:01:11.430593 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:12 crc kubenswrapper[4689]: I1201 09:01:12.086719 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:12 crc kubenswrapper[4689]: I1201 09:01:12.087477 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:15 crc kubenswrapper[4689]: I1201 09:01:15.725641 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:01:15 crc kubenswrapper[4689]: I1201 09:01:15.727087 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:01:15 crc kubenswrapper[4689]: I1201 09:01:15.787740 4689 patch_prober.go:28] interesting pod/route-controller-manager-6cf74ff74d-rrhrc container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Dec 01 09:01:15 crc kubenswrapper[4689]: I1201 09:01:15.787824 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6cf74ff74d-rrhrc" podUID="e48cba8c-2540-496c-87df-1be952119db4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:16 crc kubenswrapper[4689]: I1201 09:01:16.757894 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:16 crc kubenswrapper[4689]: I1201 09:01:16.757934 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:18 crc kubenswrapper[4689]: I1201 09:01:18.648054 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:21 crc kubenswrapper[4689]: I1201 09:01:21.431567 4689 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-gwkk8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:21 crc kubenswrapper[4689]: I1201 09:01:21.431855 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" podUID="8bebf2e0-afe5-4e98-8cdf-496c5d355ef9" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:21 crc kubenswrapper[4689]: I1201 09:01:21.431914 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:21 crc kubenswrapper[4689]: I1201 09:01:21.431946 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:21 crc kubenswrapper[4689]: I1201 09:01:21.431988 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Dec 01 09:01:21 crc kubenswrapper[4689]: I1201 09:01:21.432056 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:22 crc kubenswrapper[4689]: I1201 09:01:22.077578 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:22 crc kubenswrapper[4689]: I1201 09:01:22.077975 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:22 crc kubenswrapper[4689]: I1201 09:01:22.159615 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" podUID="6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:24 crc kubenswrapper[4689]: E1201 09:01:24.798138 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:24 crc kubenswrapper[4689]: I1201 09:01:24.981013 4689 patch_prober.go:28] interesting pod/console-866776c457-g542r container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:24 crc kubenswrapper[4689]: I1201 09:01:24.981602 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-866776c457-g542r" podUID="c24dc181-1b13-4a51-a87c-16a0b8d1d11d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:24 crc kubenswrapper[4689]: I1201 09:01:24.981720 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-866776c457-g542r" Dec 01 09:01:24 crc kubenswrapper[4689]: I1201 09:01:24.982615 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"2abba3240e9e4eeaf650b3140491a451e1d082916f928a2a272d231dd36fc3f9"} pod="openshift-console/console-866776c457-g542r" containerMessage="Container console failed liveness probe, will be restarted" Dec 01 09:01:25 crc kubenswrapper[4689]: I1201 09:01:25.864656 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" podUID="ea3e4b08-090d-444e-ba53-a3df490fbaf8" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:26 crc kubenswrapper[4689]: I1201 09:01:26.198936 4689 patch_prober.go:28] interesting pod/nmstate-webhook-5f6d4c5ccb-fbrdp container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.19:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:26 crc kubenswrapper[4689]: I1201 09:01:26.199092 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fbrdp" podUID="4eb87e27-d5ce-4aa6-9808-862d7afb9fd1" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.19:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:26 crc kubenswrapper[4689]: I1201 09:01:26.358540 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" podUID="3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:26 crc kubenswrapper[4689]: I1201 09:01:26.708756 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" podUID="f94d79da-740a-4080-81d0-ff3bf1867b3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:26 crc kubenswrapper[4689]: I1201 09:01:26.708764 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" podUID="af92d0ca-8211-49a0-9362-bd5749143fff" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:26 crc kubenswrapper[4689]: I1201 09:01:26.708743 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" podUID="5f9861d6-2700-4af6-b385-e79220c14b2e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:26 crc kubenswrapper[4689]: I1201 09:01:26.732656 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:26 crc kubenswrapper[4689]: I1201 09:01:26.732682 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:27 crc kubenswrapper[4689]: I1201 09:01:27.085007 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness 
probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:27 crc kubenswrapper[4689]: I1201 09:01:27.085073 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:27 crc kubenswrapper[4689]: I1201 09:01:27.085085 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:27 crc kubenswrapper[4689]: I1201 09:01:27.085273 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:27 crc kubenswrapper[4689]: I1201 09:01:27.090508 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:27 crc kubenswrapper[4689]: I1201 09:01:27.090543 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:28 crc kubenswrapper[4689]: I1201 09:01:28.647219 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.437545 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.437530 4689 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-gwkk8 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.438046 4689 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-gwkk8" podUID="8bebf2e0-afe5-4e98-8cdf-496c5d355ef9" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.438027 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.437663 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.438182 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.438192 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.438299 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.439078 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"efa025fea1ec8337bd709a13be3919080f774ea2493595d595a90da6dd2b01d3"} pod="openshift-console/downloads-7954f5f757-xx949" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.439118 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" containerID="cri-o://efa025fea1ec8337bd709a13be3919080f774ea2493595d595a90da6dd2b01d3" gracePeriod=2 Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.649570 4689 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-zvzpg container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:31 crc kubenswrapper[4689]: I1201 09:01:31.649828 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zvzpg" podUID="c1a4774c-b15d-424e-bb37-d6880da5ad85" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:32 crc kubenswrapper[4689]: I1201 09:01:32.067032 4689 prober.go:107] "Probe failed" 
probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-8pg9k" podUID="79369af1-c9d2-4d8e-a675-a5174bc0e4ad" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:32 crc kubenswrapper[4689]: I1201 09:01:32.079469 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:32 crc kubenswrapper[4689]: I1201 09:01:32.079542 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:32 crc kubenswrapper[4689]: I1201 09:01:32.201597 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" podUID="6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:32 crc kubenswrapper[4689]: I1201 09:01:32.201676 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" podUID="6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:32 crc kubenswrapper[4689]: I1201 09:01:32.398531 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:32 crc kubenswrapper[4689]: I1201 09:01:32.398836 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:32 crc kubenswrapper[4689]: I1201 09:01:32.479558 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:32 crc kubenswrapper[4689]: I1201 09:01:32.479628 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.559693 4689 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.604878 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.605188 4689 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bc73e3a6c8466074f7a3426599663264f2b944371848010f6ae8a2865175740f" exitCode=1 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.605306 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bc73e3a6c8466074f7a3426599663264f2b944371848010f6ae8a2865175740f"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.605341 4689 scope.go:117] "RemoveContainer" containerID="16d09630f7eb6a631d65473e272c4f98de5ba071d33992776b4101ac797b0854" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.606151 4689 scope.go:117] "RemoveContainer" containerID="bc73e3a6c8466074f7a3426599663264f2b944371848010f6ae8a2865175740f" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.642530 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-xx949_bd24264f-fc40-410e-9bed-3f8e340035b5/download-server/0.log" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.642570 4689 generic.go:334] "Generic (PLEG): container finished" podID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerID="efa025fea1ec8337bd709a13be3919080f774ea2493595d595a90da6dd2b01d3" exitCode=137 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.642653 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx949" event={"ID":"bd24264f-fc40-410e-9bed-3f8e340035b5","Type":"ContainerDied","Data":"efa025fea1ec8337bd709a13be3919080f774ea2493595d595a90da6dd2b01d3"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.689484 4689 generic.go:334] "Generic (PLEG): container finished" podID="3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a" containerID="e59fb6fb45cf29d075833a33d00c3b71fa2b98532303d400115ac9d6e827ef12" exitCode=1 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.689604 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" event={"ID":"3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a","Type":"ContainerDied","Data":"e59fb6fb45cf29d075833a33d00c3b71fa2b98532303d400115ac9d6e827ef12"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.690300 4689 scope.go:117] "RemoveContainer" containerID="e59fb6fb45cf29d075833a33d00c3b71fa2b98532303d400115ac9d6e827ef12" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.703814 4689 generic.go:334] "Generic (PLEG): container finished" podID="ae47d16a-5025-44f4-8fa4-f5aa08b126b8" containerID="e5b0e1ca75714ff3f38148592fad2593531e37a12b4cf4cba6605f284f908d55" exitCode=1 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.703878 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" 
event={"ID":"ae47d16a-5025-44f4-8fa4-f5aa08b126b8","Type":"ContainerDied","Data":"e5b0e1ca75714ff3f38148592fad2593531e37a12b4cf4cba6605f284f908d55"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.717402 4689 scope.go:117] "RemoveContainer" containerID="e5b0e1ca75714ff3f38148592fad2593531e37a12b4cf4cba6605f284f908d55" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.736181 4689 generic.go:334] "Generic (PLEG): container finished" podID="f94d79da-740a-4080-81d0-ff3bf1867b3d" containerID="5c8a188ae4918f28252525d4f1ed4bce661c362aa0df7e4b9adf6aa4d4425e27" exitCode=1 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.736276 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" event={"ID":"f94d79da-740a-4080-81d0-ff3bf1867b3d","Type":"ContainerDied","Data":"5c8a188ae4918f28252525d4f1ed4bce661c362aa0df7e4b9adf6aa4d4425e27"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.736942 4689 scope.go:117] "RemoveContainer" containerID="5c8a188ae4918f28252525d4f1ed4bce661c362aa0df7e4b9adf6aa4d4425e27" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.784337 4689 generic.go:334] "Generic (PLEG): container finished" podID="7ce2f328-3ee3-4800-89e4-9141c841c258" containerID="7347f054335dc90e751a8ca26d7279f6c11dddb31bdeecf16e5d090060e7a37d" exitCode=1 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.790071 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" event={"ID":"7ce2f328-3ee3-4800-89e4-9141c841c258","Type":"ContainerDied","Data":"7347f054335dc90e751a8ca26d7279f6c11dddb31bdeecf16e5d090060e7a37d"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.823547 4689 generic.go:334] "Generic (PLEG): container finished" podID="e44ef73a-e172-4557-920d-42f84488390e" containerID="b4446a22dda5c8c8d06cb6191791c18c7741f439351c31f950ba7c4a66a5f81e" exitCode=1 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.823624 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" event={"ID":"e44ef73a-e172-4557-920d-42f84488390e","Type":"ContainerDied","Data":"b4446a22dda5c8c8d06cb6191791c18c7741f439351c31f950ba7c4a66a5f81e"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.826786 4689 generic.go:334] "Generic (PLEG): container finished" podID="4d923f8c-103b-4b12-b2e7-ea926440e5e7" containerID="93aab006f6281f8ee17b6f05c766568d383eda0fea5bcd10f6b4866513687905" exitCode=1 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.827560 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" event={"ID":"4d923f8c-103b-4b12-b2e7-ea926440e5e7","Type":"ContainerDied","Data":"93aab006f6281f8ee17b6f05c766568d383eda0fea5bcd10f6b4866513687905"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.840323 4689 generic.go:334] "Generic (PLEG): container finished" podID="7d09395b-ad54-4b96-af05-ea6ce866de71" containerID="ce9e6f72ccd8bad046549034209643bd06bc3ee86c38add8f2155910e6b38163" exitCode=1 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.840470 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" event={"ID":"7d09395b-ad54-4b96-af05-ea6ce866de71","Type":"ContainerDied","Data":"ce9e6f72ccd8bad046549034209643bd06bc3ee86c38add8f2155910e6b38163"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 
09:01:33.840998 4689 scope.go:117] "RemoveContainer" containerID="7347f054335dc90e751a8ca26d7279f6c11dddb31bdeecf16e5d090060e7a37d" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.846103 4689 scope.go:117] "RemoveContainer" containerID="b4446a22dda5c8c8d06cb6191791c18c7741f439351c31f950ba7c4a66a5f81e" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.847493 4689 scope.go:117] "RemoveContainer" containerID="93aab006f6281f8ee17b6f05c766568d383eda0fea5bcd10f6b4866513687905" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.851109 4689 scope.go:117] "RemoveContainer" containerID="ce9e6f72ccd8bad046549034209643bd06bc3ee86c38add8f2155910e6b38163" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.909793 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.981415 4689 generic.go:334] "Generic (PLEG): container finished" podID="ea3e4b08-090d-444e-ba53-a3df490fbaf8" containerID="d46890654ac3bf95905488cacc82bbc6875b199f3e7054d393c7023165d23dfe" exitCode=1 Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.981487 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" event={"ID":"ea3e4b08-090d-444e-ba53-a3df490fbaf8","Type":"ContainerDied","Data":"d46890654ac3bf95905488cacc82bbc6875b199f3e7054d393c7023165d23dfe"} Dec 01 09:01:33 crc kubenswrapper[4689]: I1201 09:01:33.982110 4689 scope.go:117] "RemoveContainer" containerID="d46890654ac3bf95905488cacc82bbc6875b199f3e7054d393c7023165d23dfe" Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.008269 4689 generic.go:334] "Generic (PLEG): container finished" podID="5f9861d6-2700-4af6-b385-e79220c14b2e" containerID="510706de463d3cc3ee305a707744841042c8c7dd65f83e004176b52779a99d8b" exitCode=1 Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.008330 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" event={"ID":"5f9861d6-2700-4af6-b385-e79220c14b2e","Type":"ContainerDied","Data":"510706de463d3cc3ee305a707744841042c8c7dd65f83e004176b52779a99d8b"} Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.008861 4689 scope.go:117] "RemoveContainer" containerID="510706de463d3cc3ee305a707744841042c8c7dd65f83e004176b52779a99d8b" Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.019774 4689 generic.go:334] "Generic (PLEG): container finished" podID="b3049390-311d-46ed-b472-d32a22f2f8d2" containerID="03a5f1e4ba4e2fbd1999315d002f1864a6be92021362866ce0fdec62bfbb07fa" exitCode=1 Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.020109 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" event={"ID":"b3049390-311d-46ed-b472-d32a22f2f8d2","Type":"ContainerDied","Data":"03a5f1e4ba4e2fbd1999315d002f1864a6be92021362866ce0fdec62bfbb07fa"} Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.020743 4689 scope.go:117] "RemoveContainer" containerID="03a5f1e4ba4e2fbd1999315d002f1864a6be92021362866ce0fdec62bfbb07fa" Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.053353 4689 generic.go:334] "Generic (PLEG): container finished" podID="12885cbd-1d3e-40c1-b7f5-73bdb6572db9" containerID="15fdcdf37e974a67377b06bd36ea61e169720e8196e73318493a1f93a71ab8f9" exitCode=1 Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.053429 4689 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" event={"ID":"12885cbd-1d3e-40c1-b7f5-73bdb6572db9","Type":"ContainerDied","Data":"15fdcdf37e974a67377b06bd36ea61e169720e8196e73318493a1f93a71ab8f9"} Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.054283 4689 scope.go:117] "RemoveContainer" containerID="15fdcdf37e974a67377b06bd36ea61e169720e8196e73318493a1f93a71ab8f9" Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.491519 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.491595 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.824111 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.824490 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.840430 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" Dec 01 09:01:34 crc kubenswrapper[4689]: I1201 09:01:34.840483 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.008428 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.008483 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.073840 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.292915 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.292970 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.316149 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.316193 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.378393 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.378460 4689 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.402520 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.402577 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.402520 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.402779 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.585719 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.585767 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.607146 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.607192 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.722826 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 09:01:35 crc kubenswrapper[4689]: I1201 09:01:35.723699 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 01 09:01:36 crc kubenswrapper[4689]: I1201 09:01:36.290465 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 09:01:36 crc kubenswrapper[4689]: I1201 09:01:36.736662 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:36 crc 
kubenswrapper[4689]: I1201 09:01:36.737083 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:37 crc kubenswrapper[4689]: I1201 09:01:37.368921 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:01:37 crc kubenswrapper[4689]: I1201 09:01:37.413434 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7459744dff-cxqv7" podUID="e242b763-d0db-401f-b552-d109d6c5ec28" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 01 09:01:38 crc kubenswrapper[4689]: I1201 09:01:38.402517 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:38 crc kubenswrapper[4689]: I1201 09:01:38.402536 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:38 crc kubenswrapper[4689]: I1201 09:01:38.402585 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:38 crc kubenswrapper[4689]: I1201 09:01:38.402604 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:38 crc kubenswrapper[4689]: I1201 09:01:38.402737 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 09:01:39 crc kubenswrapper[4689]: I1201 09:01:39.406714 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:39 crc kubenswrapper[4689]: I1201 09:01:39.406797 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:40 
crc kubenswrapper[4689]: I1201 09:01:40.347971 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:01:40 crc kubenswrapper[4689]: I1201 09:01:40.348350 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:01:40 crc kubenswrapper[4689]: I1201 09:01:40.532668 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 09:01:40 crc kubenswrapper[4689]: I1201 09:01:40.582804 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 09:01:41 crc kubenswrapper[4689]: I1201 09:01:41.404520 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:41 crc kubenswrapper[4689]: I1201 09:01:41.404849 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:41 crc kubenswrapper[4689]: I1201 09:01:41.404532 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:41 crc kubenswrapper[4689]: I1201 09:01:41.404954 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:41 crc kubenswrapper[4689]: I1201 09:01:41.405015 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 09:01:41 crc kubenswrapper[4689]: I1201 09:01:41.405842 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"a85e9de4547f742cf05dd249b25308b5a4189c6cb2d4fc75a6012f15e9f9e0ed"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Dec 01 09:01:41 crc kubenswrapper[4689]: I1201 09:01:41.405884 4689 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" containerID="cri-o://a85e9de4547f742cf05dd249b25308b5a4189c6cb2d4fc75a6012f15e9f9e0ed" gracePeriod=30 Dec 01 09:01:42 crc kubenswrapper[4689]: I1201 09:01:42.092568 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:42 crc kubenswrapper[4689]: I1201 09:01:42.092775 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:42 crc kubenswrapper[4689]: I1201 09:01:42.325117 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 09:01:43 crc kubenswrapper[4689]: I1201 09:01:43.394601 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:01:43 crc kubenswrapper[4689]: I1201 09:01:43.395001 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:01:44 crc kubenswrapper[4689]: I1201 09:01:44.828597 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-2vf7n" podUID="11f527ec-49a1-4be9-a67b-676eb6b8feba" containerName="registry-server" probeResult="failure" output=< Dec 01 09:01:44 crc kubenswrapper[4689]: timeout: health rpc did not complete within 1s Dec 01 09:01:44 crc kubenswrapper[4689]: > Dec 01 09:01:45 crc kubenswrapper[4689]: I1201 09:01:45.818535 4689 patch_prober.go:28] interesting pod/controller-manager-689b8cbc5f-scmr6 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:45 crc kubenswrapper[4689]: I1201 09:01:45.818630 4689 patch_prober.go:28] interesting pod/controller-manager-689b8cbc5f-scmr6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:45 crc kubenswrapper[4689]: I1201 09:01:45.818629 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" podUID="fad122ae-5995-4afe-8520-d3f958ff065c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:45 crc kubenswrapper[4689]: I1201 09:01:45.818706 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" podUID="fad122ae-5995-4afe-8520-d3f958ff065c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:46 crc kubenswrapper[4689]: I1201 09:01:46.394046 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:01:46 crc kubenswrapper[4689]: I1201 09:01:46.394105 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:01:46 crc kubenswrapper[4689]: I1201 09:01:46.732531 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:46 crc kubenswrapper[4689]: I1201 09:01:46.732555 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:49 crc kubenswrapper[4689]: I1201 09:01:49.394533 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:01:49 crc kubenswrapper[4689]: I1201 09:01:49.394940 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:01:49 crc kubenswrapper[4689]: I1201 09:01:49.395075 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 09:01:50 crc kubenswrapper[4689]: I1201 09:01:50.037337 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-866776c457-g542r" podUID="c24dc181-1b13-4a51-a87c-16a0b8d1d11d" containerName="console" containerID="cri-o://2abba3240e9e4eeaf650b3140491a451e1d082916f928a2a272d231dd36fc3f9" gracePeriod=15 Dec 01 09:01:50 crc kubenswrapper[4689]: I1201 09:01:50.347697 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:01:50 crc kubenswrapper[4689]: I1201 09:01:50.348039 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:01:50 crc kubenswrapper[4689]: I1201 09:01:50.582640 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 09:01:51 crc kubenswrapper[4689]: I1201 09:01:51.726955 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:51 crc kubenswrapper[4689]: I1201 09:01:51.727014 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:51 crc kubenswrapper[4689]: I1201 09:01:51.726969 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:51 crc kubenswrapper[4689]: I1201 09:01:51.727148 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:51 crc kubenswrapper[4689]: I1201 09:01:51.937521 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:51 crc kubenswrapper[4689]: I1201 09:01:51.937665 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:51 crc kubenswrapper[4689]: I1201 09:01:51.937549 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:51 crc kubenswrapper[4689]: I1201 09:01:51.937866 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:52 crc kubenswrapper[4689]: I1201 09:01:52.078504 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:52 crc kubenswrapper[4689]: I1201 09:01:52.078502 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:52 crc kubenswrapper[4689]: I1201 09:01:52.394262 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:01:52 crc kubenswrapper[4689]: I1201 09:01:52.394354 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:01:53 crc kubenswrapper[4689]: I1201 09:01:53.152255 4689 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-tg572 container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:53 crc kubenswrapper[4689]: I1201 09:01:53.153021 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-tg572" podUID="5b45e776-d57b-4922-b11b-80b8de9f85d3" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:55 crc kubenswrapper[4689]: I1201 09:01:55.394455 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:01:55 crc kubenswrapper[4689]: I1201 09:01:55.394963 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:01:55 crc kubenswrapper[4689]: I1201 09:01:55.777637 4689 patch_prober.go:28] interesting pod/controller-manager-689b8cbc5f-scmr6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:55 crc kubenswrapper[4689]: I1201 09:01:55.777710 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" podUID="fad122ae-5995-4afe-8520-d3f958ff065c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:56 crc kubenswrapper[4689]: I1201 09:01:56.732518 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:56 crc kubenswrapper[4689]: I1201 09:01:56.732553 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:57 crc kubenswrapper[4689]: I1201 09:01:57.820596 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:57 crc kubenswrapper[4689]: I1201 09:01:57.820598 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:01:57 crc kubenswrapper[4689]: I1201 09:01:57.820708 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:57 crc kubenswrapper[4689]: I1201 09:01:57.820740 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:58 crc kubenswrapper[4689]: I1201 09:01:58.394208 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection 
refused" start-of-body= Dec 01 09:01:58 crc kubenswrapper[4689]: I1201 09:01:58.394569 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:01:58 crc kubenswrapper[4689]: I1201 09:01:58.647264 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:01:59 crc kubenswrapper[4689]: I1201 09:01:59.136548 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-nmlzc" podUID="b3b8a95d-6924-4416-a625-995ed59e230d" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:00 crc kubenswrapper[4689]: I1201 09:02:00.347114 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:02:00 crc kubenswrapper[4689]: I1201 09:02:00.347168 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 09:02:01.394793 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 09:02:01.395327 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 09:02:01.727195 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 09:02:01.727278 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 
09:02:01.727419 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 09:02:01.727285 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 09:02:01.937432 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 09:02:01.937492 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 09:02:01.937527 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:01 crc kubenswrapper[4689]: I1201 09:02:01.937550 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:02 crc kubenswrapper[4689]: I1201 09:02:02.118566 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:02 crc kubenswrapper[4689]: I1201 09:02:02.118676 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:02:02 crc kubenswrapper[4689]: I1201 09:02:02.118805 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:02 crc kubenswrapper[4689]: I1201 09:02:02.119600 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:02:02 crc kubenswrapper[4689]: I1201 09:02:02.119655 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="nova-metadata-log" containerStatusID={"Type":"cri-o","ID":"1b797774757f694f816224944208d9dc5bfe9a50bf5db7eeaef14bbeeb0c5c6b"} pod="openstack/nova-metadata-0" containerMessage="Container nova-metadata-log failed startup probe, will be restarted" Dec 01 09:02:02 crc kubenswrapper[4689]: I1201 09:02:02.119720 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="nova-metadata-metadata" containerStatusID={"Type":"cri-o","ID":"8c05dd7b740399b0328f87adf6cb02e022e1b8c89810cbebc53f1a7560107c72"} pod="openstack/nova-metadata-0" containerMessage="Container nova-metadata-metadata failed startup probe, will be restarted" Dec 01 09:02:02 crc kubenswrapper[4689]: I1201 09:02:02.119754 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" containerID="cri-o://1b797774757f694f816224944208d9dc5bfe9a50bf5db7eeaef14bbeeb0c5c6b" gracePeriod=30 Dec 01 09:02:04 crc kubenswrapper[4689]: I1201 09:02:04.394315 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:04 crc kubenswrapper[4689]: I1201 09:02:04.395675 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:05 crc kubenswrapper[4689]: I1201 09:02:05.565623 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j" podUID="2b35aff9-c66d-448c-9883-05e650f7f147" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:05 crc kubenswrapper[4689]: I1201 09:02:05.622414 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:02:05 crc kubenswrapper[4689]: I1201 09:02:05.622721 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:02:05 crc kubenswrapper[4689]: I1201 09:02:05.777559 4689 patch_prober.go:28] interesting pod/controller-manager-689b8cbc5f-scmr6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:05 crc kubenswrapper[4689]: I1201 09:02:05.777631 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" podUID="fad122ae-5995-4afe-8520-d3f958ff065c" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:06 crc kubenswrapper[4689]: I1201 09:02:06.805618 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:06 crc kubenswrapper[4689]: I1201 09:02:06.805731 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:02:06 crc kubenswrapper[4689]: I1201 09:02:06.806569 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="nova-api-log" containerStatusID={"Type":"cri-o","ID":"f4a47cecc4df29167762f2b2dd42fd820b681ff1a9a727945ff2739359fbbe9f"} pod="openstack/nova-api-0" containerMessage="Container nova-api-log failed startup probe, will be restarted" Dec 01 09:02:06 crc kubenswrapper[4689]: I1201 09:02:06.806620 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" containerID="cri-o://f4a47cecc4df29167762f2b2dd42fd820b681ff1a9a727945ff2739359fbbe9f" gracePeriod=30 Dec 01 09:02:06 crc kubenswrapper[4689]: I1201 09:02:06.806746 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:06 crc kubenswrapper[4689]: I1201 09:02:06.806781 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:02:07 crc kubenswrapper[4689]: I1201 09:02:07.394209 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:07 crc kubenswrapper[4689]: I1201 09:02:07.394286 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:07 crc kubenswrapper[4689]: I1201 09:02:07.819549 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:07 crc kubenswrapper[4689]: I1201 09:02:07.819558 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:07 crc kubenswrapper[4689]: I1201 09:02:07.819616 
4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:07 crc kubenswrapper[4689]: I1201 09:02:07.819622 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:08 crc kubenswrapper[4689]: I1201 09:02:08.471544 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" podUID="161f3daa-6403-48b2-8e33-b01d632a2316" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:08 crc kubenswrapper[4689]: I1201 09:02:08.471603 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" podUID="161f3daa-6403-48b2-8e33-b01d632a2316" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:09 crc kubenswrapper[4689]: I1201 09:02:09.152678 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-5j4hf" podUID="67f63643-d748-4058-b24c-66ce8a8c3234" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:10 crc kubenswrapper[4689]: I1201 09:02:10.351235 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:02:10 crc kubenswrapper[4689]: I1201 09:02:10.351295 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:02:10 crc kubenswrapper[4689]: I1201 09:02:10.394083 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:10 crc kubenswrapper[4689]: I1201 09:02:10.394179 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.727191 4689 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.727340 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.727345 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.727459 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.727513 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.727635 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.729473 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"c099adf2e4ed6ab3907d7b9cee98d370f405b1fc52d0b864e548ec3172dd2661"} pod="openshift-console-operator/console-operator-58897d9998-z629s" containerMessage="Container console-operator failed liveness probe, will be restarted" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.729599 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" containerID="cri-o://c099adf2e4ed6ab3907d7b9cee98d370f405b1fc52d0b864e548ec3172dd2661" gracePeriod=30 Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.936912 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.936998 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.937051 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.937832 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"cba92047a7a632925f896dab9b77969c1150e1427687ee0c155b4db192ee4e3d"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" containerMessage="Container olm-operator failed liveness probe, will be restarted" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.937875 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" containerID="cri-o://cba92047a7a632925f896dab9b77969c1150e1427687ee0c155b4db192ee4e3d" gracePeriod=30 Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.938116 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.938216 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:11 crc kubenswrapper[4689]: I1201 09:02:11.938331 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" Dec 01 09:02:12 crc kubenswrapper[4689]: I1201 09:02:12.729166 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:12 crc kubenswrapper[4689]: I1201 09:02:12.729249 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:13 crc kubenswrapper[4689]: I1201 09:02:13.150958 4689 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-tg572 container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:13 crc kubenswrapper[4689]: I1201 09:02:13.151312 4689 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-image-registry/image-registry-66df7c8f76-tg572" podUID="5b45e776-d57b-4922-b11b-80b8de9f85d3" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:13 crc kubenswrapper[4689]: I1201 09:02:13.394642 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:13 crc kubenswrapper[4689]: I1201 09:02:13.394718 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:14 crc kubenswrapper[4689]: I1201 09:02:14.981894 4689 patch_prober.go:28] interesting pod/console-866776c457-g542r container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 01 09:02:14 crc kubenswrapper[4689]: I1201 09:02:14.983074 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-866776c457-g542r" podUID="c24dc181-1b13-4a51-a87c-16a0b8d1d11d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 01 09:02:15 crc kubenswrapper[4689]: I1201 09:02:15.565549 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-25q6j" podUID="2b35aff9-c66d-448c-9883-05e650f7f147" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:15 crc kubenswrapper[4689]: I1201 09:02:15.623496 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:02:15 crc kubenswrapper[4689]: I1201 09:02:15.623495 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:02:16 crc kubenswrapper[4689]: I1201 09:02:16.394871 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:16 crc kubenswrapper[4689]: I1201 09:02:16.395817 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:17 crc 
kubenswrapper[4689]: I1201 09:02:17.819589 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:17 crc kubenswrapper[4689]: I1201 09:02:17.820293 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:17 crc kubenswrapper[4689]: I1201 09:02:17.819591 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:17 crc kubenswrapper[4689]: I1201 09:02:17.820508 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:17 crc kubenswrapper[4689]: I1201 09:02:17.820629 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" Dec 01 09:02:17 crc kubenswrapper[4689]: I1201 09:02:17.820733 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" Dec 01 09:02:17 crc kubenswrapper[4689]: I1201 09:02:17.821275 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"a44c6ac6c569313a1279c318dc779effd9720f53f67036c49136a7b38e19dcc8"} pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Dec 01 09:02:17 crc kubenswrapper[4689]: I1201 09:02:17.821319 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" containerID="cri-o://a44c6ac6c569313a1279c318dc779effd9720f53f67036c49136a7b38e19dcc8" gracePeriod=30 Dec 01 09:02:18 crc kubenswrapper[4689]: I1201 09:02:18.430683 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" podUID="161f3daa-6403-48b2-8e33-b01d632a2316" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:18 crc kubenswrapper[4689]: I1201 09:02:18.648476 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8080/livez\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Dec 01 09:02:18 crc kubenswrapper[4689]: I1201 09:02:18.861566 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:18 crc kubenswrapper[4689]: I1201 09:02:18.861641 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:19 crc kubenswrapper[4689]: I1201 09:02:19.395037 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:19 crc kubenswrapper[4689]: I1201 09:02:19.395473 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:20 crc kubenswrapper[4689]: I1201 09:02:20.228570 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:20 crc kubenswrapper[4689]: I1201 09:02:20.228690 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:20 crc kubenswrapper[4689]: I1201 09:02:20.347703 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:02:20 crc kubenswrapper[4689]: I1201 09:02:20.347799 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:02:21 crc kubenswrapper[4689]: I1201 09:02:21.726838 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:21 crc kubenswrapper[4689]: I1201 09:02:21.727174 4689 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:21 crc kubenswrapper[4689]: I1201 09:02:21.940926 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:21 crc kubenswrapper[4689]: I1201 09:02:21.941324 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:22 crc kubenswrapper[4689]: I1201 09:02:22.163515 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" podUID="6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:22 crc kubenswrapper[4689]: I1201 09:02:22.221564 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:22 crc kubenswrapper[4689]: I1201 09:02:22.221709 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:22 crc kubenswrapper[4689]: I1201 09:02:22.222458 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:22 crc kubenswrapper[4689]: I1201 09:02:22.222548 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:22 crc kubenswrapper[4689]: I1201 09:02:22.395005 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure 
output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:22 crc kubenswrapper[4689]: I1201 09:02:22.395072 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:23 crc kubenswrapper[4689]: I1201 09:02:23.152578 4689 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-tg572 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.66:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:23 crc kubenswrapper[4689]: I1201 09:02:23.153252 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-tg572" podUID="5b45e776-d57b-4922-b11b-80b8de9f85d3" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.66:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:23 crc kubenswrapper[4689]: E1201 09:02:23.408185 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:24 crc kubenswrapper[4689]: I1201 09:02:24.515777 4689 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-nnx7f container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:24 crc kubenswrapper[4689]: I1201 09:02:24.515854 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" podUID="70e552a9-22d9-4efc-b40a-25232123691b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:24 crc kubenswrapper[4689]: I1201 09:02:24.980919 4689 patch_prober.go:28] interesting pod/console-866776c457-g542r container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 01 09:02:24 crc kubenswrapper[4689]: I1201 09:02:24.981015 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-866776c457-g542r" podUID="c24dc181-1b13-4a51-a87c-16a0b8d1d11d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 01 09:02:25 crc kubenswrapper[4689]: I1201 09:02:25.394993 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:25 crc kubenswrapper[4689]: I1201 09:02:25.395091 4689 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:25 crc kubenswrapper[4689]: I1201 09:02:25.635520 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:02:25 crc kubenswrapper[4689]: I1201 09:02:25.636709 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Dec 01 09:02:25 crc kubenswrapper[4689]: I1201 09:02:25.635534 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:02:25 crc kubenswrapper[4689]: I1201 09:02:25.637276 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 01 09:02:25 crc kubenswrapper[4689]: I1201 09:02:25.637931 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"dd35cdcc63b59bd0ec3b1fcfc4e426e3585823f6c176ab62c8ed5bfd6bedec01"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Dec 01 09:02:25 crc kubenswrapper[4689]: I1201 09:02:25.778665 4689 patch_prober.go:28] interesting pod/controller-manager-689b8cbc5f-scmr6 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:25 crc kubenswrapper[4689]: I1201 09:02:25.778959 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" podUID="fad122ae-5995-4afe-8520-d3f958ff065c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:26 crc kubenswrapper[4689]: I1201 09:02:26.012644 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" podUID="0d311ded-de3a-42e8-87d3-23c50c4fbd8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:26 crc kubenswrapper[4689]: E1201 09:02:26.854196 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:02:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:02:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:02:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T09:02:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:27 crc kubenswrapper[4689]: I1201 09:02:27.779541 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:27 crc kubenswrapper[4689]: I1201 09:02:27.779647 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:28 crc kubenswrapper[4689]: I1201 09:02:28.473686 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" podUID="161f3daa-6403-48b2-8e33-b01d632a2316" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:28 crc kubenswrapper[4689]: I1201 09:02:28.473854 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" Dec 01 09:02:28 crc kubenswrapper[4689]: I1201 09:02:28.474099 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" podUID="161f3daa-6403-48b2-8e33-b01d632a2316" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:28 crc kubenswrapper[4689]: I1201 09:02:28.474510 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:28 crc kubenswrapper[4689]: I1201 09:02:28.474541 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" 
podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:28 crc kubenswrapper[4689]: I1201 09:02:28.647888 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:28 crc kubenswrapper[4689]: I1201 09:02:28.647963 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:29 crc kubenswrapper[4689]: I1201 09:02:29.152065 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-5j4hf" podUID="67f63643-d748-4058-b24c-66ce8a8c3234" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:29 crc kubenswrapper[4689]: I1201 09:02:29.515774 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" podUID="161f3daa-6403-48b2-8e33-b01d632a2316" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:30 crc kubenswrapper[4689]: I1201 09:02:30.229647 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:30 crc kubenswrapper[4689]: I1201 09:02:30.229672 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:30 crc kubenswrapper[4689]: I1201 09:02:30.347813 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:02:30 crc kubenswrapper[4689]: I1201 09:02:30.348141 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:02:31 crc kubenswrapper[4689]: I1201 09:02:31.394298 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: 
connection refused" start-of-body= Dec 01 09:02:31 crc kubenswrapper[4689]: I1201 09:02:31.394405 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:31 crc kubenswrapper[4689]: I1201 09:02:31.648561 4689 patch_prober.go:28] interesting pod/oauth-openshift-7544d6d989-kzcmr container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:31 crc kubenswrapper[4689]: I1201 09:02:31.648636 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" podUID="2043c180-d558-48e0-8295-e2d244822828" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:31 crc kubenswrapper[4689]: I1201 09:02:31.727519 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:31 crc kubenswrapper[4689]: I1201 09:02:31.727579 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:31 crc kubenswrapper[4689]: I1201 09:02:31.937446 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:31 crc kubenswrapper[4689]: I1201 09:02:31.937514 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.084338 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.084470 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.199782 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" podUID="6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.199860 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" podUID="6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.221800 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.221872 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.223251 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.223292 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.410550 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" podUID="4f43cf3a-d166-44ba-8d44-9e81b0666e0a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:32 crc kubenswrapper[4689]: I1201 09:02:32.410619 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" podUID="4f43cf3a-d166-44ba-8d44-9e81b0666e0a" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:33 crc kubenswrapper[4689]: I1201 09:02:33.151845 4689 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-tg572 container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:33 crc kubenswrapper[4689]: I1201 09:02:33.152283 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-tg572" podUID="5b45e776-d57b-4922-b11b-80b8de9f85d3" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:33 crc kubenswrapper[4689]: E1201 09:02:33.408587 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:34 crc kubenswrapper[4689]: I1201 09:02:34.394564 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:34 crc kubenswrapper[4689]: I1201 09:02:34.394618 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:34 crc kubenswrapper[4689]: I1201 09:02:34.516490 4689 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-nnx7f container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:34 crc kubenswrapper[4689]: I1201 09:02:34.516555 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" podUID="70e552a9-22d9-4efc-b40a-25232123691b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:34 crc kubenswrapper[4689]: I1201 09:02:34.980871 4689 patch_prober.go:28] interesting pod/console-866776c457-g542r container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 01 09:02:34 crc kubenswrapper[4689]: I1201 09:02:34.980932 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-866776c457-g542r" podUID="c24dc181-1b13-4a51-a87c-16a0b8d1d11d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 01 09:02:34 crc kubenswrapper[4689]: I1201 09:02:34.981039 4689 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-866776c457-g542r" Dec 01 09:02:35 crc kubenswrapper[4689]: I1201 09:02:35.622792 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output="command timed out" Dec 01 09:02:35 crc kubenswrapper[4689]: I1201 09:02:35.691472 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" podUID="8b33263b-a51c-49e4-b301-b975791e098a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Dec 01 09:02:35 crc kubenswrapper[4689]: I1201 09:02:35.691543 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" podUID="8b33263b-a51c-49e4-b301-b975791e098a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": dial tcp 10.217.0.84:8081: connect: connection refused" Dec 01 09:02:35 crc kubenswrapper[4689]: I1201 09:02:35.736396 4689 patch_prober.go:28] interesting pod/controller-manager-689b8cbc5f-scmr6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:35 crc kubenswrapper[4689]: I1201 09:02:35.736490 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-689b8cbc5f-scmr6" podUID="fad122ae-5995-4afe-8520-d3f958ff065c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:36 crc kubenswrapper[4689]: I1201 09:02:36.046614 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" podUID="0d311ded-de3a-42e8-87d3-23c50c4fbd8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:36 crc kubenswrapper[4689]: I1201 09:02:36.046687 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" podUID="0d311ded-de3a-42e8-87d3-23c50c4fbd8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:36 crc kubenswrapper[4689]: E1201 09:02:36.855466 4689 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:37 crc kubenswrapper[4689]: I1201 09:02:37.085317 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 
09:02:37 crc kubenswrapper[4689]: I1201 09:02:37.085380 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:37 crc kubenswrapper[4689]: I1201 09:02:37.395966 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:37 crc kubenswrapper[4689]: I1201 09:02:37.396041 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:37 crc kubenswrapper[4689]: I1201 09:02:37.738729 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:37 crc kubenswrapper[4689]: I1201 09:02:37.739069 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:38 crc kubenswrapper[4689]: I1201 09:02:38.431585 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb" podUID="161f3daa-6403-48b2-8e33-b01d632a2316" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:38 crc kubenswrapper[4689]: I1201 09:02:38.647791 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:38 crc kubenswrapper[4689]: I1201 09:02:38.647879 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8080/livez\": context deadline exceeded" Dec 01 09:02:38 crc kubenswrapper[4689]: I1201 09:02:38.647919 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Dec 01 09:02:38 crc kubenswrapper[4689]: I1201 09:02:38.648649 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" 
containerStatusID={"Type":"cri-o","ID":"1c208c6d7e118cbc858f8c9cceb237c4f18b73917b5d1fc309cdaffdab0f24b0"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Dec 01 09:02:38 crc kubenswrapper[4689]: I1201 09:02:38.648686 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" containerID="cri-o://1c208c6d7e118cbc858f8c9cceb237c4f18b73917b5d1fc309cdaffdab0f24b0" gracePeriod=30 Dec 01 09:02:39 crc kubenswrapper[4689]: I1201 09:02:39.152667 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-5j4hf" podUID="67f63643-d748-4058-b24c-66ce8a8c3234" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:39 crc kubenswrapper[4689]: I1201 09:02:39.153008 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:02:39 crc kubenswrapper[4689]: I1201 09:02:39.153113 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 09:02:40.231550 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 09:02:40.231974 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-5c56f" Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 09:02:40.231549 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 09:02:40.232172 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5c56f" Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 09:02:40.233007 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"e5d7e508a15b7fbc78e47144c9b5278a60dab21b547acd114dee6298729f4e40"} pod="metallb-system/speaker-5c56f" containerMessage="Container speaker failed liveness probe, will be restarted" Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 09:02:40.233088 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" containerID="cri-o://e5d7e508a15b7fbc78e47144c9b5278a60dab21b547acd114dee6298729f4e40" gracePeriod=2 Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 
09:02:40.347797 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 09:02:40.347874 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 09:02:40.394242 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:40 crc kubenswrapper[4689]: I1201 09:02:40.394294 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:41 crc kubenswrapper[4689]: I1201 09:02:41.286172 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:41 crc kubenswrapper[4689]: I1201 09:02:41.645183 4689 patch_prober.go:28] interesting pod/oauth-openshift-7544d6d989-kzcmr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:41 crc kubenswrapper[4689]: I1201 09:02:41.645491 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7544d6d989-kzcmr" podUID="2043c180-d558-48e0-8295-e2d244822828" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:41 crc kubenswrapper[4689]: I1201 09:02:41.726729 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:41 crc kubenswrapper[4689]: I1201 09:02:41.726779 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:41 crc kubenswrapper[4689]: I1201 
09:02:41.937203 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:41 crc kubenswrapper[4689]: I1201 09:02:41.937258 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.084943 4689 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.085000 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.159547 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" podUID="6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.159680 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.221711 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.221776 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.221816 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.221740 4689 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-ltkzh container/catalog-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.221898 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.222037 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.222342 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"1986c8ce300cf1cdb7c3455a45a9b0bf56a6767193d9c1fb845bbe4e7a5cbfb9"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.222393 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" podUID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerName="catalog-operator" containerID="cri-o://1986c8ce300cf1cdb7c3455a45a9b0bf56a6767193d9c1fb845bbe4e7a5cbfb9" gracePeriod=30 Dec 01 09:02:42 crc kubenswrapper[4689]: I1201 09:02:42.368499 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" podUID="4f43cf3a-d166-44ba-8d44-9e81b0666e0a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:43 crc kubenswrapper[4689]: I1201 09:02:43.151447 4689 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-tg572 container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.66:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:43 crc kubenswrapper[4689]: I1201 09:02:43.151569 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-tg572" podUID="5b45e776-d57b-4922-b11b-80b8de9f85d3" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.66:5000/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:43 crc kubenswrapper[4689]: I1201 09:02:43.152496 4689 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-tg572 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:43 crc kubenswrapper[4689]: I1201 09:02:43.152555 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-tg572" podUID="5b45e776-d57b-4922-b11b-80b8de9f85d3" containerName="registry" 
probeResult="failure" output="Get \"https://10.217.0.66:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:43 crc kubenswrapper[4689]: I1201 09:02:43.201697 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" podUID="6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:43 crc kubenswrapper[4689]: I1201 09:02:43.394930 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:43 crc kubenswrapper[4689]: I1201 09:02:43.395046 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:43 crc kubenswrapper[4689]: E1201 09:02:43.409230 4689 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:43 crc kubenswrapper[4689]: I1201 09:02:43.648057 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8081/readyz\": dial tcp 10.217.0.183:8081: connect: connection refused" Dec 01 09:02:43 crc kubenswrapper[4689]: I1201 09:02:43.648289 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 09:02:44 crc kubenswrapper[4689]: I1201 09:02:44.517006 4689 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-nnx7f container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:02:44 crc kubenswrapper[4689]: I1201 09:02:44.517125 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" podUID="70e552a9-22d9-4efc-b40a-25232123691b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:44 crc kubenswrapper[4689]: I1201 09:02:44.517387 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 09:02:44 crc kubenswrapper[4689]: I1201 09:02:44.669793 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nnx7f" Dec 01 09:02:44 crc kubenswrapper[4689]: E1201 09:02:44.849577 4689 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC 
openstack/persistence-rabbitmq-server-0: failed to fetch PVC from API server: rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="openstack/rabbitmq-server-0" volumeName="persistence" Dec 01 09:02:44 crc kubenswrapper[4689]: I1201 09:02:44.985492 4689 patch_prober.go:28] interesting pod/console-866776c457-g542r container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Dec 01 09:02:44 crc kubenswrapper[4689]: I1201 09:02:44.985964 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-866776c457-g542r" podUID="c24dc181-1b13-4a51-a87c-16a0b8d1d11d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Dec 01 09:02:45 crc kubenswrapper[4689]: E1201 09:02:45.003958 4689 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" event=< Dec 01 09:02:45 crc kubenswrapper[4689]: &Event{ObjectMeta:{console-operator-58897d9998-z629s.187d0be60f5b6e2d openshift-console-operator 47241 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console-operator,Name:console-operator-58897d9998-z629s,UID:304e31dc-6fcd-4654-9c3d-ef693f7c71a6,APIVersion:v1,ResourceVersion:27210,FieldPath:spec.containers{console-operator},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.20:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 01 09:02:45 crc kubenswrapper[4689]: body: Dec 01 09:02:45 crc kubenswrapper[4689]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 09:01:51 +0000 UTC,LastTimestamp:2025-12-01 09:02:11.727305698 +0000 UTC m=+1411.799593642,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 01 09:02:45 crc kubenswrapper[4689]: > Dec 01 09:02:45 crc kubenswrapper[4689]: I1201 09:02:45.032266 4689 status_manager.go:851] "Failed to get status for pod" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" pod="openshift-console-operator/console-operator-58897d9998-z629s" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Dec 01 09:02:45 crc kubenswrapper[4689]: I1201 09:02:45.081985 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" podUID="d4a1d78c-9486-4b3b-afac-2d51d2cb14df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": read tcp 10.217.0.2:50900->10.217.0.78:8081: read: connection reset by peer" Dec 01 09:02:45 crc kubenswrapper[4689]: I1201 09:02:45.590957 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" podUID="af92d0ca-8211-49a0-9362-bd5749143fff" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Dec 01 09:02:45 crc kubenswrapper[4689]: I1201 09:02:45.642129 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" 
output="command timed out" Dec 01 09:02:45 crc kubenswrapper[4689]: I1201 09:02:45.693614 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" podUID="8b33263b-a51c-49e4-b301-b975791e098a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Dec 01 09:02:45 crc kubenswrapper[4689]: I1201 09:02:45.907732 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" event={"ID":"3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a","Type":"ContainerStarted","Data":"1cfd652fc5c923303b067f14a289ca6fdfebff77c3f3528a23837ee68d7defd5"} Dec 01 09:02:45 crc kubenswrapper[4689]: I1201 09:02:45.962448 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-lz96b_b1b970c0-59a2-4782-8664-b17a7d7a8202/olm-operator/0.log" Dec 01 09:02:45 crc kubenswrapper[4689]: I1201 09:02:45.962541 4689 generic.go:334] "Generic (PLEG): container finished" podID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerID="cba92047a7a632925f896dab9b77969c1150e1427687ee0c155b4db192ee4e3d" exitCode=137 Dec 01 09:02:45 crc kubenswrapper[4689]: I1201 09:02:45.962706 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" event={"ID":"b1b970c0-59a2-4782-8664-b17a7d7a8202","Type":"ContainerDied","Data":"cba92047a7a632925f896dab9b77969c1150e1427687ee0c155b4db192ee4e3d"} Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:45.998095 4689 generic.go:334] "Generic (PLEG): container finished" podID="432574e7-df30-4103-a396-c758c4df932c" containerID="1c208c6d7e118cbc858f8c9cceb237c4f18b73917b5d1fc309cdaffdab0f24b0" exitCode=2 Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:45.998213 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"432574e7-df30-4103-a396-c758c4df932c","Type":"ContainerDied","Data":"1c208c6d7e118cbc858f8c9cceb237c4f18b73917b5d1fc309cdaffdab0f24b0"} Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.033389 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-xx949_bd24264f-fc40-410e-9bed-3f8e340035b5/download-server/0.log" Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.033684 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx949" event={"ID":"bd24264f-fc40-410e-9bed-3f8e340035b5","Type":"ContainerStarted","Data":"e5fc8f64853773373310d29668966bab7f77963df58530ad88dd85640d99cfb1"} Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.053724 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-866776c457-g542r_c24dc181-1b13-4a51-a87c-16a0b8d1d11d/console/0.log" Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.053965 4689 generic.go:334] "Generic (PLEG): container finished" podID="c24dc181-1b13-4a51-a87c-16a0b8d1d11d" containerID="2abba3240e9e4eeaf650b3140491a451e1d082916f928a2a272d231dd36fc3f9" exitCode=137 Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.054100 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-866776c457-g542r" event={"ID":"c24dc181-1b13-4a51-a87c-16a0b8d1d11d","Type":"ContainerDied","Data":"2abba3240e9e4eeaf650b3140491a451e1d082916f928a2a272d231dd36fc3f9"} Dec 01 09:02:46 crc 
kubenswrapper[4689]: I1201 09:02:46.137618 4689 generic.go:334] "Generic (PLEG): container finished" podID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerID="1b797774757f694f816224944208d9dc5bfe9a50bf5db7eeaef14bbeeb0c5c6b" exitCode=143 Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.137707 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0e9419c-e23b-4c71-b88e-736138bcdd65","Type":"ContainerDied","Data":"1b797774757f694f816224944208d9dc5bfe9a50bf5db7eeaef14bbeeb0c5c6b"} Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.179387 4689 generic.go:334] "Generic (PLEG): container finished" podID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerID="f4a47cecc4df29167762f2b2dd42fd820b681ff1a9a727945ff2739359fbbe9f" exitCode=143 Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.179469 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3a578c7-bcdf-46f5-a781-5759e3c6da45","Type":"ContainerDied","Data":"f4a47cecc4df29167762f2b2dd42fd820b681ff1a9a727945ff2739359fbbe9f"} Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.219955 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-z629s_304e31dc-6fcd-4654-9c3d-ef693f7c71a6/console-operator/0.log" Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.220008 4689 generic.go:334] "Generic (PLEG): container finished" podID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerID="c099adf2e4ed6ab3907d7b9cee98d370f405b1fc52d0b864e548ec3172dd2661" exitCode=137 Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.220084 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z629s" event={"ID":"304e31dc-6fcd-4654-9c3d-ef693f7c71a6","Type":"ContainerDied","Data":"c099adf2e4ed6ab3907d7b9cee98d370f405b1fc52d0b864e548ec3172dd2661"} Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.330199 4689 generic.go:334] "Generic (PLEG): container finished" podID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerID="e5d7e508a15b7fbc78e47144c9b5278a60dab21b547acd114dee6298729f4e40" exitCode=137 Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.330293 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5c56f" event={"ID":"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1","Type":"ContainerDied","Data":"e5d7e508a15b7fbc78e47144c9b5278a60dab21b547acd114dee6298729f4e40"} Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.411736 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.411959 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.412961 4689 generic.go:334] "Generic (PLEG): container finished" podID="3751be2a-8675-4b07-8198-101bfdd71d72" containerID="9c68b68d67785316e6f6553dfce056ace95416dc5b9b69feb61079e214f7a7d8" exitCode=1 Dec 01 09:02:46 crc 
kubenswrapper[4689]: I1201 09:02:46.413027 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" event={"ID":"3751be2a-8675-4b07-8198-101bfdd71d72","Type":"ContainerDied","Data":"9c68b68d67785316e6f6553dfce056ace95416dc5b9b69feb61079e214f7a7d8"}
Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.413617 4689 scope.go:117] "RemoveContainer" containerID="9c68b68d67785316e6f6553dfce056ace95416dc5b9b69feb61079e214f7a7d8"
Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.468430 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.468684 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" containerID="cri-o://8c05dd7b740399b0328f87adf6cb02e022e1b8c89810cbebc53f1a7560107c72" gracePeriod=30
Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.496059 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26809261429df4e4043e1ef21c0d0c7c40fd32853106675d508577e52fa54ad0"}
Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.740855 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body=
Dec 01 09:02:46 crc kubenswrapper[4689]: I1201 09:02:46.740911 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused"
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.391950 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-c6fb994fd-5lzsb"
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.507224 4689 generic.go:334] "Generic (PLEG): container finished" podID="ffc5e400-7853-4b1d-ae11-d6ffa553093a" containerID="850d2e57e694b90d1a1651336d47ee83e893a902353ad0441565600133bf8b81" exitCode=1
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.507287 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" event={"ID":"ffc5e400-7853-4b1d-ae11-d6ffa553093a","Type":"ContainerDied","Data":"850d2e57e694b90d1a1651336d47ee83e893a902353ad0441565600133bf8b81"}
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.507754 4689 scope.go:117] "RemoveContainer" containerID="850d2e57e694b90d1a1651336d47ee83e893a902353ad0441565600133bf8b81"
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.509662 4689 generic.go:334] "Generic (PLEG): container finished" podID="5266d333-3337-4481-9478-2e1df848bfa2" containerID="7a67bb518fb7bd48a319d7d948d0b42751dc40acc5391c3844cbecd6a50823c3" exitCode=1
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.509705 4689 kubelet.go:2453] "SyncLoop
(PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" event={"ID":"5266d333-3337-4481-9478-2e1df848bfa2","Type":"ContainerDied","Data":"7a67bb518fb7bd48a319d7d948d0b42751dc40acc5391c3844cbecd6a50823c3"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.509985 4689 scope.go:117] "RemoveContainer" containerID="7a67bb518fb7bd48a319d7d948d0b42751dc40acc5391c3844cbecd6a50823c3" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.512311 4689 generic.go:334] "Generic (PLEG): container finished" podID="2974e300-3f26-4ec0-912a-9ee6b78f33ce" containerID="f0bd44809a243032f4330eb55da2d873db0cefecf714240eb92fe0d9ad165ac2" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.512379 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" event={"ID":"2974e300-3f26-4ec0-912a-9ee6b78f33ce","Type":"ContainerDied","Data":"f0bd44809a243032f4330eb55da2d873db0cefecf714240eb92fe0d9ad165ac2"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.512824 4689 scope.go:117] "RemoveContainer" containerID="f0bd44809a243032f4330eb55da2d873db0cefecf714240eb92fe0d9ad165ac2" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.514781 4689 generic.go:334] "Generic (PLEG): container finished" podID="fc02885a-340a-4800-bd0b-360c0476b456" containerID="4bfc28c452869dd01be53694f326b71644b275ca79ce9ed226ff50d415747351" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.514828 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" event={"ID":"fc02885a-340a-4800-bd0b-360c0476b456","Type":"ContainerDied","Data":"4bfc28c452869dd01be53694f326b71644b275ca79ce9ed226ff50d415747351"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.516446 4689 scope.go:117] "RemoveContainer" containerID="4bfc28c452869dd01be53694f326b71644b275ca79ce9ed226ff50d415747351" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.520625 4689 generic.go:334] "Generic (PLEG): container finished" podID="6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538" containerID="9f0bbc7e2d5153cebd64da616c8eefaaba1deba76bf8cd281a7c8794eab90f8f" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.520723 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" event={"ID":"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538","Type":"ContainerDied","Data":"9f0bbc7e2d5153cebd64da616c8eefaaba1deba76bf8cd281a7c8794eab90f8f"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.521500 4689 scope.go:117] "RemoveContainer" containerID="9f0bbc7e2d5153cebd64da616c8eefaaba1deba76bf8cd281a7c8794eab90f8f" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.525935 4689 generic.go:334] "Generic (PLEG): container finished" podID="af92d0ca-8211-49a0-9362-bd5749143fff" containerID="82c462e52b8439d753bdc4006c7581c3634018c7793ad9dbf614d34a3df77478" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.525974 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" event={"ID":"af92d0ca-8211-49a0-9362-bd5749143fff","Type":"ContainerDied","Data":"82c462e52b8439d753bdc4006c7581c3634018c7793ad9dbf614d34a3df77478"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.526632 4689 scope.go:117] "RemoveContainer" 
containerID="82c462e52b8439d753bdc4006c7581c3634018c7793ad9dbf614d34a3df77478" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.528523 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.529710 4689 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.529787 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.530552 4689 scope.go:117] "RemoveContainer" containerID="62ff4a03d2e50e2d881287008cfedc7ff067e04d3dc24ace25b56677b0eb117c" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.532676 4689 generic.go:334] "Generic (PLEG): container finished" podID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerID="a44c6ac6c569313a1279c318dc779effd9720f53f67036c49136a7b38e19dcc8" exitCode=0 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.532731 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" event={"ID":"0cd9ccf0-2f85-4649-ac80-931f337566ca","Type":"ContainerDied","Data":"a44c6ac6c569313a1279c318dc779effd9720f53f67036c49136a7b38e19dcc8"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.540710 4689 generic.go:334] "Generic (PLEG): container finished" podID="0d311ded-de3a-42e8-87d3-23c50c4fbd8a" containerID="30cce1a511301fd9f4cdb6c5e73063560df763e8e94ce72cd015af9364d80bb9" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.540775 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" event={"ID":"0d311ded-de3a-42e8-87d3-23c50c4fbd8a","Type":"ContainerDied","Data":"30cce1a511301fd9f4cdb6c5e73063560df763e8e94ce72cd015af9364d80bb9"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.541450 4689 scope.go:117] "RemoveContainer" containerID="30cce1a511301fd9f4cdb6c5e73063560df763e8e94ce72cd015af9364d80bb9" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.549004 4689 generic.go:334] "Generic (PLEG): container finished" podID="d4a1d78c-9486-4b3b-afac-2d51d2cb14df" containerID="580ab1486fbeebc6a290a48124fc05016d87f3ed618dcb818814955c2c5f1fc7" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.549095 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" event={"ID":"d4a1d78c-9486-4b3b-afac-2d51d2cb14df","Type":"ContainerDied","Data":"580ab1486fbeebc6a290a48124fc05016d87f3ed618dcb818814955c2c5f1fc7"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.549792 4689 scope.go:117] "RemoveContainer" containerID="580ab1486fbeebc6a290a48124fc05016d87f3ed618dcb818814955c2c5f1fc7" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.557752 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" 
event={"ID":"5f9861d6-2700-4af6-b385-e79220c14b2e","Type":"ContainerStarted","Data":"da41714841833f90af4966edc36f218725a598f603d9e451b8dc2943ee656b84"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.561173 4689 generic.go:334] "Generic (PLEG): container finished" podID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerID="a85e9de4547f742cf05dd249b25308b5a4189c6cb2d4fc75a6012f15e9f9e0ed" exitCode=0 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.561326 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" event={"ID":"21eaf97a-bf73-4e70-a9bc-153b17b8a799","Type":"ContainerDied","Data":"a85e9de4547f742cf05dd249b25308b5a4189c6cb2d4fc75a6012f15e9f9e0ed"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.567672 4689 generic.go:334] "Generic (PLEG): container finished" podID="4f43cf3a-d166-44ba-8d44-9e81b0666e0a" containerID="d483bddb39b7d383a7886426aa5489fec1a40e6807ae3ae4ea1d31b9334898a5" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.567737 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" event={"ID":"4f43cf3a-d166-44ba-8d44-9e81b0666e0a","Type":"ContainerDied","Data":"d483bddb39b7d383a7886426aa5489fec1a40e6807ae3ae4ea1d31b9334898a5"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.568344 4689 scope.go:117] "RemoveContainer" containerID="d483bddb39b7d383a7886426aa5489fec1a40e6807ae3ae4ea1d31b9334898a5" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.573962 4689 generic.go:334] "Generic (PLEG): container finished" podID="7085b604-e50c-4940-ac21-b6fe208c82cd" containerID="b444725f23a637106d845d7ad3c9b06ca2f998ada5bc009210dbada3c38c6417" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.574024 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" event={"ID":"7085b604-e50c-4940-ac21-b6fe208c82cd","Type":"ContainerDied","Data":"b444725f23a637106d845d7ad3c9b06ca2f998ada5bc009210dbada3c38c6417"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.574517 4689 scope.go:117] "RemoveContainer" containerID="b444725f23a637106d845d7ad3c9b06ca2f998ada5bc009210dbada3c38c6417" Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.584778 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b" containerID="1986c8ce300cf1cdb7c3455a45a9b0bf56a6767193d9c1fb845bbe4e7a5cbfb9" exitCode=0 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.584861 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" event={"ID":"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b","Type":"ContainerDied","Data":"1986c8ce300cf1cdb7c3455a45a9b0bf56a6767193d9c1fb845bbe4e7a5cbfb9"} Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.590683 4689 generic.go:334] "Generic (PLEG): container finished" podID="8b33263b-a51c-49e4-b301-b975791e098a" containerID="ed2ea08912c28122f8907b80dcd93c9f150e2e1faadc823dfb674c7affe39859" exitCode=1 Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.591009 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" event={"ID":"8b33263b-a51c-49e4-b301-b975791e098a","Type":"ContainerDied","Data":"ed2ea08912c28122f8907b80dcd93c9f150e2e1faadc823dfb674c7affe39859"} Dec 01 09:02:47 crc kubenswrapper[4689]: 
I1201 09:02:47.591689 4689 scope.go:117] "RemoveContainer" containerID="ed2ea08912c28122f8907b80dcd93c9f150e2e1faadc823dfb674c7affe39859"
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.592118 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.592163 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xx949"
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.592249 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Dec 01 09:02:47 crc kubenswrapper[4689]: I1201 09:02:47.605184 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" containerID="cri-o://dd35cdcc63b59bd0ec3b1fcfc4e426e3585823f6c176ab62c8ed5bfd6bedec01" gracePeriod=9
Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.626716 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-lz96b_b1b970c0-59a2-4782-8664-b17a7d7a8202/olm-operator/0.log"
Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.628288 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" event={"ID":"b1b970c0-59a2-4782-8664-b17a7d7a8202","Type":"ContainerStarted","Data":"547df6f077f8cd6dd382c7e9f92401f6970cea5d7f8bfa7eb8f81ba9d7d4add2"}
Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.628711 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b"
Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.628965 4689 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz96b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.629102 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b" podUID="b1b970c0-59a2-4782-8664-b17a7d7a8202" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.636945 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" event={"ID":"12885cbd-1d3e-40c1-b7f5-73bdb6572db9","Type":"ContainerStarted","Data":"96a5c4f2353f6af88fba72e1dbbacc261e441cbbff7d7cf709eb4cbd912ca06f"}
Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.637837 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm"
Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.653440 4689 log.go:25]
"Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-z629s_304e31dc-6fcd-4654-9c3d-ef693f7c71a6/console-operator/0.log" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.653665 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z629s" event={"ID":"304e31dc-6fcd-4654-9c3d-ef693f7c71a6","Type":"ContainerStarted","Data":"b2aac60d40082ba8775f081653f0289860b3c10e11de446b7bc4142ec8caddcc"} Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.655437 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-z629s" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.655587 4689 patch_prober.go:28] interesting pod/console-operator-58897d9998-z629s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.655670 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z629s" podUID="304e31dc-6fcd-4654-9c3d-ef693f7c71a6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.728128 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" event={"ID":"b3049390-311d-46ed-b472-d32a22f2f8d2","Type":"ContainerStarted","Data":"991baca63f6d922fb75652970500143c4d78efdeb91e26a2607a86ee052ad820"} Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.729674 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.782543 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" event={"ID":"ea3e4b08-090d-444e-ba53-a3df490fbaf8","Type":"ContainerStarted","Data":"4a95a78c63236e22c793738a94ce924a7ab219f48645c13865fbd249b3730395"} Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.782765 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.807936 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" event={"ID":"f94d79da-740a-4080-81d0-ff3bf1867b3d","Type":"ContainerStarted","Data":"f3a6a068ff7b821d32589a6276af46d85135f47d4289c3802234c4e6f83f60bd"} Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.811087 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.854172 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" event={"ID":"7ce2f328-3ee3-4800-89e4-9141c841c258","Type":"ContainerStarted","Data":"380b21c1743d27acff426825d76839709c1d51f5d728b3e125e8aeb483c1b2de"} Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.854990 4689 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.909878 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" event={"ID":"3751be2a-8675-4b07-8198-101bfdd71d72","Type":"ContainerStarted","Data":"a63eb3638759d459530bf58cbd22915304034f4628d5b95ef7ac23451876db27"} Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.911195 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.933202 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" event={"ID":"e44ef73a-e172-4557-920d-42f84488390e","Type":"ContainerStarted","Data":"d0d1b9ec2ecf0c5ece58c9789d92a9e3b11b04d58625fa6722e50352dd0c3163"} Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.934195 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.959475 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" event={"ID":"4d923f8c-103b-4b12-b2e7-ea926440e5e7","Type":"ContainerStarted","Data":"dabc854c3913d2e9d952c3bcc6f94545741cef2dce336bf020016c3d1e2b01aa"} Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.960562 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.995946 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" event={"ID":"7d09395b-ad54-4b96-af05-ea6ce866de71","Type":"ContainerStarted","Data":"240671caaf0c178c5a332eddbed126f83ffbf9c4111828a339b1b642e8b012af"} Dec 01 09:02:48 crc kubenswrapper[4689]: I1201 09:02:48.997042 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl" Dec 01 09:02:49 crc kubenswrapper[4689]: I1201 09:02:49.064586 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:02:49 crc kubenswrapper[4689]: I1201 09:02:49.064658 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:02:49 crc kubenswrapper[4689]: I1201 09:02:49.151267 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Dec 01 09:02:49 crc kubenswrapper[4689]: I1201 09:02:49.169533 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" event={"ID":"ae47d16a-5025-44f4-8fa4-f5aa08b126b8","Type":"ContainerStarted","Data":"33df9a5826db466c416147e9c30e8c768d38bc253d6535975ba091e312ec71b9"} Dec 01 09:02:49 crc kubenswrapper[4689]: I1201 09:02:49.169591 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 09:02:49 crc kubenswrapper[4689]: I1201 09:02:49.169610 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" Dec 01 09:02:49 crc kubenswrapper[4689]: I1201 09:02:49.394629 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 01 09:02:49 crc kubenswrapper[4689]: I1201 09:02:49.394936 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.143503 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.145848 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2a654411e262ed343c2272a1882ecf68f86573c23c218322ec4f9edbcb7d7f86"} Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.149347 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" event={"ID":"dcf239d3-b34c-4fa0-aafa-4e82e29e0c9b","Type":"ContainerStarted","Data":"2ea040c17315f4d9c78d8a9c6d527587353df83b6b8e1a3366c4643eff321fda"} Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.151232 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.156456 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ltkzh" Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.166746 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" event={"ID":"21eaf97a-bf73-4e70-a9bc-153b17b8a799","Type":"ContainerStarted","Data":"a1251c2bd440744e0155b4e6b7bdd720ee1948c534bec7d4a59dfa1d72584736"} Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.166897 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.171664 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-866776c457-g542r_c24dc181-1b13-4a51-a87c-16a0b8d1d11d/console/0.log" 
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.171765 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-866776c457-g542r" event={"ID":"c24dc181-1b13-4a51-a87c-16a0b8d1d11d","Type":"ContainerStarted","Data":"a07b1b1cb4f22c4daab749c797d77742a2d8ba3d270f4a47d427a1311ae13d9d"}
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.184669 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3a578c7-bcdf-46f5-a781-5759e3c6da45","Type":"ContainerStarted","Data":"17bab21c495ff6cef4c374fd9f5069040bb8587287121e36cbc0c3901471083c"}
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.186036 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="nova-api-api" containerStatusID={"Type":"cri-o","ID":"bb59a2934681384cde14be0af192111e76c0b676289cc8b22d1500102f817b95"} pod="openstack/nova-api-0" containerMessage="Container nova-api-api failed startup probe, will be restarted"
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.186094 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-api" containerID="cri-o://bb59a2934681384cde14be0af192111e76c0b676289cc8b22d1500102f817b95" gracePeriod=30
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.200290 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw" event={"ID":"4f43cf3a-d166-44ba-8d44-9e81b0666e0a","Type":"ContainerStarted","Data":"b3b5d8ea3fe10652ba5146802ba5bb86ecf14e3e0ee9494ebfffb65f10cee643"}
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.201432 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw"
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.208725 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" event={"ID":"af92d0ca-8211-49a0-9362-bd5749143fff","Type":"ContainerStarted","Data":"38fe5c8747369298fdd3a4e9891f3cd9e976b2ac3a68dcdec45ea966c99f3677"}
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.209216 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn"
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.308449 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz96b"
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.347743 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.347791 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.347936 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console:
Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.347992 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.533786 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:02:50 crc kubenswrapper[4689]: I1201 09:02:50.686472 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-z629s"
Dec 01 09:02:51 crc kubenswrapper[4689]: I1201 09:02:51.117610 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9"
Dec 01 09:02:51 crc kubenswrapper[4689]: I1201 09:02:51.261614 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" event={"ID":"fc02885a-340a-4800-bd0b-360c0476b456","Type":"ContainerStarted","Data":"3efbe11fafc9a48fce845d326ad05cf51f32f820cbbe45d397e07d6a75e09488"}
Dec 01 09:02:51 crc kubenswrapper[4689]: I1201 09:02:51.274821 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 09:02:51 crc kubenswrapper[4689]: I1201 09:02:51.281709 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" event={"ID":"0d311ded-de3a-42e8-87d3-23c50c4fbd8a","Type":"ContainerStarted","Data":"2328b850a1b13721e3a99cef713e4694a94647a4a19fe6b96d303b9274014e43"}
Dec 01 09:02:51 crc kubenswrapper[4689]: I1201 09:02:51.293212 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5c56f" event={"ID":"4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1","Type":"ContainerStarted","Data":"639254b47760dca2db1031fdd5bfd34b881d013c300a8f9365403db8bc9baed9"}
Dec 01 09:02:51 crc kubenswrapper[4689]: I1201 09:02:51.299701 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp"
Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.310058 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" event={"ID":"d4a1d78c-9486-4b3b-afac-2d51d2cb14df","Type":"ContainerStarted","Data":"615969a9317fe4d53c3094820ca5467204f03982dbf30ac5c18bf06195c97c00"}
Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.314206 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" event={"ID":"0cd9ccf0-2f85-4649-ac80-931f337566ca","Type":"ContainerStarted","Data":"182cfc50aceaed8b381e502ff8ff45a7940ea86ae01836bfd33a253fdfd18408"}
Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.316994 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl"
event={"ID":"ffc5e400-7853-4b1d-ae11-d6ffa553093a","Type":"ContainerStarted","Data":"4a8657ceefc5929ed88bed59f775fdb3623c7b667d775b3c6be9866a293ca4e5"} Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.320538 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" event={"ID":"5266d333-3337-4481-9478-2e1df848bfa2","Type":"ContainerStarted","Data":"69c751e976097948641190e78840689e3fdbf579de1d1bf6b678347463202344"} Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.322476 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9" event={"ID":"6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538","Type":"ContainerStarted","Data":"48f086aed40e3294ab33ca7b6412b930ff42ffaa468c3a2da99daf8e14721f80"} Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.324420 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-t56mz" event={"ID":"7085b604-e50c-4940-ac21-b6fe208c82cd","Type":"ContainerStarted","Data":"d4f9246f4a1a86d3e7cc5678330476ff00aa3be150a3db51f46e39f27e7af1fc"} Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.326590 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" event={"ID":"2974e300-3f26-4ec0-912a-9ee6b78f33ce","Type":"ContainerStarted","Data":"37c5c0fedd7dac6a8c97327c1e5b88fd3e7cfefe02e11d9b0e213ee386f53f02"} Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.330964 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" event={"ID":"8b33263b-a51c-49e4-b301-b975791e098a","Type":"ContainerStarted","Data":"e3388fe1e8c1241e03e7e69f4e4f62354a53bde46e5d8cd2c2d00cef62583a79"} Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.333193 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5c56f" Dec 01 09:02:52 crc kubenswrapper[4689]: I1201 09:02:52.333238 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv" Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.338854 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.340260 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.339133 4689 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" containerID="cri-o://580ab1486fbeebc6a290a48124fc05016d87f3ed618dcb818814955c2c5f1fc7" Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.340533 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" Dec 01 
09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.340571 4689 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" containerID="cri-o://4bfc28c452869dd01be53694f326b71644b275ca79ce9ed226ff50d415747351"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.340583 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.340600 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.340613 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.340626 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.340637 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.341266 4689 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" containerID="cri-o://850d2e57e694b90d1a1651336d47ee83e893a902353ad0441565600133bf8b81"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.341291 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.501517 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.647455 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="432574e7-df30-4103-a396-c758c4df932c" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.183:8081/readyz\": dial tcp 10.217.0.183:8081: connect: connection refused"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.869803 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:02:53 crc kubenswrapper[4689]: I1201 09:02:53.874825 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.348205 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body=
Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.348262 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca"
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.348584 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.349461 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.349486 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.349499 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.349511 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.349526 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl" Dec 01 09:02:54 crc kubenswrapper[4689]: E1201 09:02:54.463466 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd35cdcc63b59bd0ec3b1fcfc4e426e3585823f6c176ab62c8ed5bfd6bedec01" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 01 09:02:54 crc kubenswrapper[4689]: E1201 09:02:54.470434 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd35cdcc63b59bd0ec3b1fcfc4e426e3585823f6c176ab62c8ed5bfd6bedec01" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 01 09:02:54 crc kubenswrapper[4689]: E1201 09:02:54.473026 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd35cdcc63b59bd0ec3b1fcfc4e426e3585823f6c176ab62c8ed5bfd6bedec01" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 01 09:02:54 crc kubenswrapper[4689]: E1201 09:02:54.473084 4689 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.493053 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7vlqn" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.826465 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-f7xtr" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.843440 4689 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xhrp7" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.907177 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-x722t" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.980793 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-866776c457-g542r" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.980919 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-866776c457-g542r" Dec 01 09:02:54 crc kubenswrapper[4689]: I1201 09:02:54.986266 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-866776c457-g542r" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.013302 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-ghq5b" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.294025 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-p296h" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.316139 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.318151 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-nsnm9" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.357871 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-w6qx2" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.362781 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-866776c457-g542r" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.444185 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-vfnzm" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.589688 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-sfplx" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.592257 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-prvxn" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.609095 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" Dec 01 09:02:55 crc kubenswrapper[4689]: I1201 09:02:55.723276 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:02:56 crc kubenswrapper[4689]: I1201 09:02:56.478475 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:02:56 crc kubenswrapper[4689]: I1201 09:02:56.478617 4689 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 01 09:02:56 crc kubenswrapper[4689]: I1201 09:02:56.479549 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"8b3034d3593a24a14d2b067c67c7cd4728b6706fb5845bc7936fe44037091d07"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Dec 01 09:02:56 crc kubenswrapper[4689]: I1201 09:02:56.479617 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" containerID="cri-o://8b3034d3593a24a14d2b067c67c7cd4728b6706fb5845bc7936fe44037091d07" gracePeriod=30 Dec 01 09:02:56 crc kubenswrapper[4689]: I1201 09:02:56.723504 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:02:56 crc kubenswrapper[4689]: I1201 09:02:56.737312 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 01 09:02:56 crc kubenswrapper[4689]: I1201 09:02:56.737384 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 01 09:02:56 crc kubenswrapper[4689]: I1201 09:02:56.738998 4689 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jhh4c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 01 09:02:56 crc kubenswrapper[4689]: I1201 09:02:56.739074 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c" podUID="0cd9ccf0-2f85-4649-ac80-931f337566ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 01 09:02:59 crc kubenswrapper[4689]: I1201 09:02:59.405907 4689 generic.go:334] "Generic (PLEG): container finished" podID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerID="dd35cdcc63b59bd0ec3b1fcfc4e426e3585823f6c176ab62c8ed5bfd6bedec01" exitCode=137 Dec 01 09:02:59 crc kubenswrapper[4689]: I1201 09:02:59.405994 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"555543d8-21bb-4dba-9c08-ab82e90ea894","Type":"ContainerDied","Data":"dd35cdcc63b59bd0ec3b1fcfc4e426e3585823f6c176ab62c8ed5bfd6bedec01"} Dec 01 09:02:59 crc kubenswrapper[4689]: I1201 09:02:59.847440 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-5b446d88c5-jxq2j" podUID="159eaec1-709b-4f6b-9c2d-271433805055" 
containerName="cert-manager-controller" probeResult="failure" output="Get \"http://10.217.0.70:9403/livez\": dial tcp 10.217.0.70:9403: connect: connection refused" Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.348006 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.348418 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.348031 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.348670 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.425629 4689 generic.go:334] "Generic (PLEG): container finished" podID="159eaec1-709b-4f6b-9c2d-271433805055" containerID="2f435985bb1cd85efc5116ba3d002c45625d783b669fe67371895bb7f118a125" exitCode=1 Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.425702 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jxq2j" event={"ID":"159eaec1-709b-4f6b-9c2d-271433805055","Type":"ContainerDied","Data":"2f435985bb1cd85efc5116ba3d002c45625d783b669fe67371895bb7f118a125"} Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.426413 4689 scope.go:117] "RemoveContainer" containerID="2f435985bb1cd85efc5116ba3d002c45625d783b669fe67371895bb7f118a125" Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.430663 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"432574e7-df30-4103-a396-c758c4df932c","Type":"ContainerStarted","Data":"23b25f8bd6b86c8b68ef4263d1b695e97f94aad4c6e1fe1dd5788cada88436f8"} Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.430897 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.432634 4689 generic.go:334] "Generic (PLEG): container finished" podID="f166eac0-2073-4aa8-9b0b-6b3c6e43b19e" containerID="30ed4fe6b91c26e9d585487c1a1e70b72333466136471c6b637c8d1cf47bedad" exitCode=1 Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.432666 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" event={"ID":"f166eac0-2073-4aa8-9b0b-6b3c6e43b19e","Type":"ContainerDied","Data":"30ed4fe6b91c26e9d585487c1a1e70b72333466136471c6b637c8d1cf47bedad"} Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.433331 4689 scope.go:117] "RemoveContainer" containerID="30ed4fe6b91c26e9d585487c1a1e70b72333466136471c6b637c8d1cf47bedad" Dec 
01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.538902 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 01 09:03:00 crc kubenswrapper[4689]: I1201 09:03:00.604443 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-tgmx9"
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.117960 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9"
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.127560 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9"
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.341818 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6fc767d767-8r9dw"
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.499231 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jxq2j" event={"ID":"159eaec1-709b-4f6b-9c2d-271433805055","Type":"ContainerStarted","Data":"d88c4f9e1684662b0f422a9683f260822c387706fb9d878d4cb9806c5c40e03d"}
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.555750 4689 generic.go:334] "Generic (PLEG): container finished" podID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerID="8c05dd7b740399b0328f87adf6cb02e022e1b8c89810cbebc53f1a7560107c72" exitCode=0
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.555842 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0e9419c-e23b-4c71-b88e-736138bcdd65","Type":"ContainerDied","Data":"8c05dd7b740399b0328f87adf6cb02e022e1b8c89810cbebc53f1a7560107c72"}
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.612588 4689 generic.go:334] "Generic (PLEG): container finished" podID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerID="8b3034d3593a24a14d2b067c67c7cd4728b6706fb5845bc7936fe44037091d07" exitCode=0
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.612666 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0556c1c8-69cc-4fa6-a3df-46a4ed439312","Type":"ContainerDied","Data":"8b3034d3593a24a14d2b067c67c7cd4728b6706fb5845bc7936fe44037091d07"}
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.640887 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-lhzz2" event={"ID":"f166eac0-2073-4aa8-9b0b-6b3c6e43b19e","Type":"ContainerStarted","Data":"8368d49425356bf81afdd02e729e10bf8fff93fa93479a3f8e8372d46b9d457f"}
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:01.676299 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"555543d8-21bb-4dba-9c08-ab82e90ea894","Type":"ContainerStarted","Data":"620c4f288b0289b3342f1450e9bf3a882ba9a8884a5856008ee7a2892c5e409e"}
Dec 01 09:03:02 crc kubenswrapper[4689]: I1201 09:03:02.689607 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0e9419c-e23b-4c71-b88e-736138bcdd65","Type":"ContainerStarted","Data":"0be1abfcaa75039bc0abe8a621d69d547c12bb47f339d6796205a83680b4e517"}
Dec 01 09:03:03 crc kubenswrapper[4689]: I1201 09:03:03.722822 4689 kubelet.go:2453] "SyncLoop (PLEG):
Dec 01 09:03:03 crc kubenswrapper[4689]: I1201 09:03:03.722822 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0e9419c-e23b-4c71-b88e-736138bcdd65","Type":"ContainerStarted","Data":"de23bad06b13ef2c3c73644c3f0d5048183ad6fabec72e34bec782b48ee07667"}
Dec 01 09:03:04 crc kubenswrapper[4689]: I1201 09:03:04.462129 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Dec 01 09:03:04 crc kubenswrapper[4689]: I1201 09:03:04.462196 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Dec 01 09:03:04 crc kubenswrapper[4689]: I1201 09:03:04.519590 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7vrt5"
Dec 01 09:03:04 crc kubenswrapper[4689]: I1201 09:03:04.753759 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-758d67db86-z298n"
Dec 01 09:03:04 crc kubenswrapper[4689]: I1201 09:03:04.870289 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0556c1c8-69cc-4fa6-a3df-46a4ed439312","Type":"ContainerStarted","Data":"9c50a818de9052b85077fd615f08a021dc6830b3327217d4fb8350e97c42f356"}
Dec 01 09:03:04 crc kubenswrapper[4689]: I1201 09:03:04.967198 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-dp8gl"
Dec 01 09:03:05 crc kubenswrapper[4689]: I1201 09:03:05.022666 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-fm9bv"
Dec 01 09:03:05 crc kubenswrapper[4689]: I1201 09:03:05.099169 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-pssbg"
Dec 01 09:03:05 crc kubenswrapper[4689]: I1201 09:03:05.693051 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-5d8x5"
Dec 01 09:03:05 crc kubenswrapper[4689]: I1201 09:03:05.724147 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": dial tcp 10.217.0.207:8774: connect: connection refused"
Dec 01 09:03:05 crc kubenswrapper[4689]: I1201 09:03:05.724342 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 01 09:03:06 crc kubenswrapper[4689]: I1201 09:03:06.070280 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 01 09:03:06 crc kubenswrapper[4689]: I1201 09:03:06.071762 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 01 09:03:06 crc kubenswrapper[4689]: I1201 09:03:06.747242 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jhh4c"
Dec 01 09:03:06 crc kubenswrapper[4689]: I1201 09:03:06.889220 4689 generic.go:334] "Generic (PLEG): container finished" podID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerID="bb59a2934681384cde14be0af192111e76c0b676289cc8b22d1500102f817b95" exitCode=0
Dec 01 09:03:06 crc kubenswrapper[4689]: I1201 09:03:06.890621 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/nova-api-0" event={"ID":"a3a578c7-bcdf-46f5-a781-5759e3c6da45","Type":"ContainerDied","Data":"bb59a2934681384cde14be0af192111e76c0b676289cc8b22d1500102f817b95"} Dec 01 09:03:09 crc kubenswrapper[4689]: I1201 09:03:09.146826 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:03:09 crc kubenswrapper[4689]: I1201 09:03:09.148506 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:03:09 crc kubenswrapper[4689]: I1201 09:03:09.150068 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5c56f" Dec 01 09:03:09 crc kubenswrapper[4689]: I1201 09:03:09.458486 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 01 09:03:09 crc kubenswrapper[4689]: I1201 09:03:09.558919 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.347457 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.347518 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.347551 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.347601 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.347565 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.348339 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"e5fc8f64853773373310d29668966bab7f77963df58530ad88dd85640d99cfb1"} pod="openshift-console/downloads-7954f5f757-xx949" containerMessage="Container download-server failed liveness probe, will be 
restarted" Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.348385 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" containerID="cri-o://e5fc8f64853773373310d29668966bab7f77963df58530ad88dd85640d99cfb1" gracePeriod=2 Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.349025 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.349048 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:10 crc kubenswrapper[4689]: I1201 09:03:10.928243 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3a578c7-bcdf-46f5-a781-5759e3c6da45","Type":"ContainerStarted","Data":"42633e4142ca8775c36b985588bc3a6f1740fffc84fd99e61e826cd9be7dac51"} Dec 01 09:03:11 crc kubenswrapper[4689]: I1201 09:03:11.069798 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:03:11 crc kubenswrapper[4689]: I1201 09:03:11.069852 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 01 09:03:11 crc kubenswrapper[4689]: I1201 09:03:11.971253 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-xx949_bd24264f-fc40-410e-9bed-3f8e340035b5/download-server/0.log" Dec 01 09:03:11 crc kubenswrapper[4689]: I1201 09:03:11.971520 4689 generic.go:334] "Generic (PLEG): container finished" podID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerID="e5fc8f64853773373310d29668966bab7f77963df58530ad88dd85640d99cfb1" exitCode=0 Dec 01 09:03:11 crc kubenswrapper[4689]: I1201 09:03:11.971965 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx949" event={"ID":"bd24264f-fc40-410e-9bed-3f8e340035b5","Type":"ContainerDied","Data":"e5fc8f64853773373310d29668966bab7f77963df58530ad88dd85640d99cfb1"} Dec 01 09:03:11 crc kubenswrapper[4689]: I1201 09:03:11.972029 4689 scope.go:117] "RemoveContainer" containerID="efa025fea1ec8337bd709a13be3919080f774ea2493595d595a90da6dd2b01d3" Dec 01 09:03:12 crc kubenswrapper[4689]: I1201 09:03:12.081538 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:03:12 crc kubenswrapper[4689]: I1201 09:03:12.081587 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:03:13 crc kubenswrapper[4689]: I1201 09:03:13.671613 4689 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 01 09:03:13 crc kubenswrapper[4689]: I1201 09:03:13.992257 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx949" event={"ID":"bd24264f-fc40-410e-9bed-3f8e340035b5","Type":"ContainerStarted","Data":"ceb481c6749e5746a95a2f9cb7d08a49f4a80cfb5d734dff1f4e9f41309244a9"} Dec 01 09:03:13 crc kubenswrapper[4689]: I1201 09:03:13.994066 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xx949" Dec 01 09:03:13 crc kubenswrapper[4689]: I1201 09:03:13.994137 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:13 crc kubenswrapper[4689]: I1201 09:03:13.994164 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:14 crc kubenswrapper[4689]: I1201 09:03:14.533215 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:03:15 crc kubenswrapper[4689]: I1201 09:03:15.014102 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:15 crc kubenswrapper[4689]: I1201 09:03:15.014163 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:15 crc kubenswrapper[4689]: I1201 09:03:15.723119 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 01 09:03:16 crc kubenswrapper[4689]: I1201 09:03:16.031228 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:16 crc kubenswrapper[4689]: I1201 09:03:16.031323 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:16 crc kubenswrapper[4689]: I1201 09:03:16.732608 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:03:16 crc kubenswrapper[4689]: 
Dec 01 09:03:16 crc kubenswrapper[4689]: I1201 09:03:16.732608 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:03:19 crc kubenswrapper[4689]: I1201 09:03:19.507205 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 09:03:20 crc kubenswrapper[4689]: I1201 09:03:20.348164 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Dec 01 09:03:20 crc kubenswrapper[4689]: I1201 09:03:20.348260 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Dec 01 09:03:20 crc kubenswrapper[4689]: I1201 09:03:20.349305 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body=
Dec 01 09:03:20 crc kubenswrapper[4689]: I1201 09:03:20.349351 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused"
Dec 01 09:03:22 crc kubenswrapper[4689]: I1201 09:03:22.080564 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:03:22 crc kubenswrapper[4689]: I1201 09:03:22.080596 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:03:24 crc kubenswrapper[4689]: I1201 09:03:24.486828 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 01 09:03:26 crc kubenswrapper[4689]: I1201 09:03:26.420553 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6599c4498-sh7sl"
\"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:03:26 crc kubenswrapper[4689]: I1201 09:03:26.732574 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:03:29 crc kubenswrapper[4689]: I1201 09:03:29.560491 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:03:30 crc kubenswrapper[4689]: I1201 09:03:30.408152 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:30 crc kubenswrapper[4689]: I1201 09:03:30.408265 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:30 crc kubenswrapper[4689]: I1201 09:03:30.408157 4689 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx949 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Dec 01 09:03:30 crc kubenswrapper[4689]: I1201 09:03:30.408591 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx949" podUID="bd24264f-fc40-410e-9bed-3f8e340035b5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.22:8080/\": dial tcp 10.217.0.22:8080: connect: connection refused" Dec 01 09:03:30 crc kubenswrapper[4689]: I1201 09:03:30.763089 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:03:30 crc kubenswrapper[4689]: I1201 09:03:30.768866 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="ceilometer-central-agent" containerID="cri-o://d769d761b4d40da117b4ff372d555d57d6b2b2f243310bfbfbf3e7c5f695228d" gracePeriod=30 Dec 01 09:03:30 crc kubenswrapper[4689]: I1201 09:03:30.768927 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="proxy-httpd" containerID="cri-o://8563f4e92b70dc58be8b849ffe0655a1d9882ee5599553c78b789ec0b174e24a" gracePeriod=30 Dec 01 09:03:30 crc kubenswrapper[4689]: I1201 09:03:30.768990 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="sg-core" containerID="cri-o://1ef4915f4e401e0048b873858950aee437da30846a84498cdc9ff067f4a35aad" gracePeriod=30 Dec 01 09:03:30 crc kubenswrapper[4689]: I1201 09:03:30.769033 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="ceilometer-notification-agent" containerID="cri-o://0c9b87e278508c3cdb1e859c1f1a98b5deb677eed17b3b78f60a66821918d297" gracePeriod=30 Dec 01 09:03:31 crc kubenswrapper[4689]: I1201 09:03:31.237519 4689 generic.go:334] "Generic (PLEG): container finished" podID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerID="8563f4e92b70dc58be8b849ffe0655a1d9882ee5599553c78b789ec0b174e24a" exitCode=0 Dec 01 09:03:31 crc kubenswrapper[4689]: I1201 09:03:31.237567 4689 generic.go:334] "Generic (PLEG): container finished" podID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerID="1ef4915f4e401e0048b873858950aee437da30846a84498cdc9ff067f4a35aad" exitCode=2 Dec 01 09:03:31 crc kubenswrapper[4689]: I1201 09:03:31.237589 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerDied","Data":"8563f4e92b70dc58be8b849ffe0655a1d9882ee5599553c78b789ec0b174e24a"} Dec 01 09:03:31 crc kubenswrapper[4689]: I1201 09:03:31.237615 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerDied","Data":"1ef4915f4e401e0048b873858950aee437da30846a84498cdc9ff067f4a35aad"} Dec 01 09:03:32 crc kubenswrapper[4689]: I1201 09:03:32.079581 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:03:32 crc kubenswrapper[4689]: I1201 09:03:32.079587 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0e9419c-e23b-4c71-b88e-736138bcdd65" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:03:32 crc kubenswrapper[4689]: I1201 09:03:32.250556 4689 generic.go:334] "Generic (PLEG): container finished" podID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerID="0c9b87e278508c3cdb1e859c1f1a98b5deb677eed17b3b78f60a66821918d297" exitCode=0 Dec 01 09:03:32 crc kubenswrapper[4689]: I1201 09:03:32.250587 4689 generic.go:334] "Generic (PLEG): container finished" podID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerID="d769d761b4d40da117b4ff372d555d57d6b2b2f243310bfbfbf3e7c5f695228d" exitCode=0 Dec 01 09:03:32 crc kubenswrapper[4689]: I1201 09:03:32.250584 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerDied","Data":"0c9b87e278508c3cdb1e859c1f1a98b5deb677eed17b3b78f60a66821918d297"} Dec 01 09:03:32 crc kubenswrapper[4689]: I1201 09:03:32.250619 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerDied","Data":"d769d761b4d40da117b4ff372d555d57d6b2b2f243310bfbfbf3e7c5f695228d"} Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.760559 4689 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.760559 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.901977 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbh96\" (UniqueName: \"kubernetes.io/projected/913d1dab-72d0-4f7b-bea3-78aabac0d13f-kube-api-access-nbh96\") pod \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") "
Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.902062 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-scripts\") pod \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") "
Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.902138 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-run-httpd\") pod \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") "
Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.902156 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-combined-ca-bundle\") pod \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") "
Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.902189 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-sg-core-conf-yaml\") pod \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") "
Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.902217 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-config-data\") pod \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") "
Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.902277 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-log-httpd\") pod \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") "
Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.902349 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-ceilometer-tls-certs\") pod \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\" (UID: \"913d1dab-72d0-4f7b-bea3-78aabac0d13f\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.903037 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "913d1dab-72d0-4f7b-bea3-78aabac0d13f" (UID: "913d1dab-72d0-4f7b-bea3-78aabac0d13f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.918550 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-scripts" (OuterVolumeSpecName: "scripts") pod "913d1dab-72d0-4f7b-bea3-78aabac0d13f" (UID: "913d1dab-72d0-4f7b-bea3-78aabac0d13f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:33 crc kubenswrapper[4689]: I1201 09:03:33.969602 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913d1dab-72d0-4f7b-bea3-78aabac0d13f-kube-api-access-nbh96" (OuterVolumeSpecName: "kube-api-access-nbh96") pod "913d1dab-72d0-4f7b-bea3-78aabac0d13f" (UID: "913d1dab-72d0-4f7b-bea3-78aabac0d13f"). InnerVolumeSpecName "kube-api-access-nbh96". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.012981 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.013031 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbh96\" (UniqueName: \"kubernetes.io/projected/913d1dab-72d0-4f7b-bea3-78aabac0d13f-kube-api-access-nbh96\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.013046 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.013058 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913d1dab-72d0-4f7b-bea3-78aabac0d13f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.013228 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "913d1dab-72d0-4f7b-bea3-78aabac0d13f" (UID: "913d1dab-72d0-4f7b-bea3-78aabac0d13f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.072841 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "913d1dab-72d0-4f7b-bea3-78aabac0d13f" (UID: "913d1dab-72d0-4f7b-bea3-78aabac0d13f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.117898 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.117952 4689 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.165539 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "913d1dab-72d0-4f7b-bea3-78aabac0d13f" (UID: "913d1dab-72d0-4f7b-bea3-78aabac0d13f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.165826 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.196566 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-config-data" (OuterVolumeSpecName: "config-data") pod "913d1dab-72d0-4f7b-bea3-78aabac0d13f" (UID: "913d1dab-72d0-4f7b-bea3-78aabac0d13f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.219946 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.219981 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913d1dab-72d0-4f7b-bea3-78aabac0d13f-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.280641 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="555543d8-21bb-4dba-9c08-ab82e90ea894" containerName="galera" probeResult="failure" output=< Dec 01 09:03:34 crc kubenswrapper[4689]: wsrep_local_state_comment (Joined) differs from Synced Dec 01 09:03:34 crc kubenswrapper[4689]: > Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.305715 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913d1dab-72d0-4f7b-bea3-78aabac0d13f","Type":"ContainerDied","Data":"583c4d8df03c0620722be3d85efe0072b50994b33ea0afb1321c62b195c8e771"} Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.305782 4689 scope.go:117] "RemoveContainer" containerID="8563f4e92b70dc58be8b849ffe0655a1d9882ee5599553c78b789ec0b174e24a" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.306013 4689 util.go:48] "No ready sandbox for pod can be found. 
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.306013 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.331594 4689 scope.go:117] "RemoveContainer" containerID="1ef4915f4e401e0048b873858950aee437da30846a84498cdc9ff067f4a35aad"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.344180 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.361135 4689 scope.go:117] "RemoveContainer" containerID="0c9b87e278508c3cdb1e859c1f1a98b5deb677eed17b3b78f60a66821918d297"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.361310 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.393388 4689 scope.go:117] "RemoveContainer" containerID="d769d761b4d40da117b4ff372d555d57d6b2b2f243310bfbfbf3e7c5f695228d"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.405468 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:03:34 crc kubenswrapper[4689]: E1201 09:03:34.415667 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="sg-core"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.415702 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="sg-core"
Dec 01 09:03:34 crc kubenswrapper[4689]: E1201 09:03:34.415728 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="proxy-httpd"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.415735 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="proxy-httpd"
Dec 01 09:03:34 crc kubenswrapper[4689]: E1201 09:03:34.415744 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06af101b-855c-409b-8f88-171d7e9aaffc" containerName="keystone-cron"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.415752 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="06af101b-855c-409b-8f88-171d7e9aaffc" containerName="keystone-cron"
Dec 01 09:03:34 crc kubenswrapper[4689]: E1201 09:03:34.415763 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="ceilometer-central-agent"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.415772 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="ceilometer-central-agent"
Dec 01 09:03:34 crc kubenswrapper[4689]: E1201 09:03:34.415791 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="ceilometer-notification-agent"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.415799 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="ceilometer-notification-agent"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.416124 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="ceilometer-notification-agent"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.416157 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="06af101b-855c-409b-8f88-171d7e9aaffc" containerName="keystone-cron"
"RemoveStaleState removing state" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="sg-core" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.416181 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="proxy-httpd" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.416191 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="ceilometer-central-agent" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.417949 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.427587 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.427715 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.430488 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.481357 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0556c1c8-69cc-4fa6-a3df-46a4ed439312" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.519870 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.528506 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6ncg\" (UniqueName: \"kubernetes.io/projected/caa8aeec-456e-4d93-883a-efd63ca6f8a8-kube-api-access-f6ncg\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.528559 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-log-httpd\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.528584 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.528629 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.528663 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-scripts\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc 
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.528706 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-run-httpd\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.528742 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.528772 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-config-data\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.630475 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6ncg\" (UniqueName: \"kubernetes.io/projected/caa8aeec-456e-4d93-883a-efd63ca6f8a8-kube-api-access-f6ncg\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.630559 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-log-httpd\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.630590 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.630660 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.630730 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-scripts\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0"
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.630796 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-run-httpd\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0"
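Each volume passing VerifyControllerAttachedVolume and then MountVolume.SetUp here corresponds to a volume/volumeMount pair in the new ceilometer-0 pod spec. A minimal sketch of the secret-backed "config-data" volume as the k8s.io/api types express it; the secret name appears in the reflector entries above, but the mount path is an illustrative assumption:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Secret-backed "config-data" volume as kubelet mounts it above;
	// the MountPath is illustrative, not read from the real spec.
	vol := corev1.Volume{
		Name: "config-data",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "ceilometer-config-data"},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "config-data",
		MountPath: "/var/lib/config-data",
		ReadOnly:  true,
	}
	fmt.Println(vol.Name, mount.MountPath)
}
```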
" pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.630899 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-config-data\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.631940 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.632172 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-log-httpd\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.632572 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-run-httpd\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.636394 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.636725 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.637388 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-config-data\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.646042 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-scripts\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.653177 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.701081 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6ncg\" (UniqueName: \"kubernetes.io/projected/caa8aeec-456e-4d93-883a-efd63ca6f8a8-kube-api-access-f6ncg\") pod \"ceilometer-0\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " pod="openstack/ceilometer-0" Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.759164 4689 util.go:30] "No sandbox for pod can be found. 
Dec 01 09:03:34 crc kubenswrapper[4689]: I1201 09:03:34.759164 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 01 09:03:35 crc kubenswrapper[4689]: I1201 09:03:35.058093 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" path="/var/lib/kubelet/pods/913d1dab-72d0-4f7b-bea3-78aabac0d13f/volumes"
Dec 01 09:03:35 crc kubenswrapper[4689]: I1201 09:03:35.335471 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:03:35 crc kubenswrapper[4689]: I1201 09:03:35.723184 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 01 09:03:36 crc kubenswrapper[4689]: I1201 09:03:36.328221 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerStarted","Data":"d663ba6d21e79ed167e2c307c04a07a836941634519bcc5f74a76deabe98f30d"}
Dec 01 09:03:36 crc kubenswrapper[4689]: I1201 09:03:36.730637 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:03:36 crc kubenswrapper[4689]: I1201 09:03:36.730686 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3a578c7-bcdf-46f5-a781-5759e3c6da45" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 01 09:03:37 crc kubenswrapper[4689]: I1201 09:03:37.340740 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerStarted","Data":"fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c"}
Dec 01 09:03:37 crc kubenswrapper[4689]: I1201 09:03:37.476204 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.146809 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.147136 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.147228 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx"
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.148135 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.148228 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" gracePeriod=600
Dec 01 09:03:39 crc kubenswrapper[4689]: E1201 09:03:39.320483 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.364239 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" exitCode=0
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.364297 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49"}
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.364338 4689 scope.go:117] "RemoveContainer" containerID="a73b6758eaf1af9bc3a327d8874afb8d2ff28265d999a583ab055845b6607b6a"
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.365127 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49"
Dec 01 09:03:39 crc kubenswrapper[4689]: E1201 09:03:39.365610 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.368610 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerStarted","Data":"9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8"}
Dec 01 09:03:39 crc kubenswrapper[4689]: I1201 09:03:39.537559 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 01 09:03:40 crc kubenswrapper[4689]: I1201 09:03:40.354471 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xx949"
Dec 01 09:03:40 crc kubenswrapper[4689]: I1201 09:03:40.418406 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerStarted","Data":"90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c"}
Dec 01 09:03:41 crc kubenswrapper[4689]: I1201 09:03:41.082220 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
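The "back-off 5m0s" in the CrashLoopBackOff errors above is kubelet's crash-loop backoff at its cap: the restart delay roughly doubles per consecutive failure until it reaches five minutes. A sketch of that computation; the 10s base is an assumption about common kubelet defaults, while the 5m cap is what this log shows:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay doubles a base delay per consecutive restart and caps it,
// mirroring the "back-off 5m0s" behaviour above. The 10s base is assumed.
func crashLoopDelay(restarts int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r))
	}
}
```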
pod="openstack/nova-metadata-0" Dec 01 09:03:41 crc kubenswrapper[4689]: I1201 09:03:41.105878 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 09:03:41 crc kubenswrapper[4689]: I1201 09:03:41.438897 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 01 09:03:42 crc kubenswrapper[4689]: I1201 09:03:42.444838 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerStarted","Data":"9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb"} Dec 01 09:03:42 crc kubenswrapper[4689]: I1201 09:03:42.445120 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="sg-core" containerID="cri-o://90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c" gracePeriod=30 Dec 01 09:03:42 crc kubenswrapper[4689]: I1201 09:03:42.445174 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="ceilometer-notification-agent" containerID="cri-o://9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8" gracePeriod=30 Dec 01 09:03:42 crc kubenswrapper[4689]: I1201 09:03:42.445124 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="proxy-httpd" containerID="cri-o://9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb" gracePeriod=30 Dec 01 09:03:42 crc kubenswrapper[4689]: I1201 09:03:42.445386 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 09:03:42 crc kubenswrapper[4689]: I1201 09:03:42.445652 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="ceilometer-central-agent" containerID="cri-o://fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c" gracePeriod=30 Dec 01 09:03:42 crc kubenswrapper[4689]: I1201 09:03:42.483275 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.517948848 podStartE2EDuration="8.48325634s" podCreationTimestamp="2025-12-01 09:03:34 +0000 UTC" firstStartedPulling="2025-12-01 09:03:35.347492398 +0000 UTC m=+1495.419780302" lastFinishedPulling="2025-12-01 09:03:41.31279989 +0000 UTC m=+1501.385087794" observedRunningTime="2025-12-01 09:03:42.480323282 +0000 UTC m=+1502.552611186" watchObservedRunningTime="2025-12-01 09:03:42.48325634 +0000 UTC m=+1502.555544244" Dec 01 09:03:43 crc kubenswrapper[4689]: I1201 09:03:43.470061 4689 generic.go:334] "Generic (PLEG): container finished" podID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerID="9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb" exitCode=0 Dec 01 09:03:43 crc kubenswrapper[4689]: I1201 09:03:43.470461 4689 generic.go:334] "Generic (PLEG): container finished" podID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerID="90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c" exitCode=2 Dec 01 09:03:43 crc kubenswrapper[4689]: I1201 09:03:43.470478 4689 generic.go:334] "Generic (PLEG): container finished" podID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" 
containerID="9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8" exitCode=0 Dec 01 09:03:43 crc kubenswrapper[4689]: I1201 09:03:43.471279 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerDied","Data":"9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb"} Dec 01 09:03:43 crc kubenswrapper[4689]: I1201 09:03:43.471454 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerDied","Data":"90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c"} Dec 01 09:03:43 crc kubenswrapper[4689]: I1201 09:03:43.471520 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerDied","Data":"9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8"} Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.092038 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.223127 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-run-httpd\") pod \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.223223 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6ncg\" (UniqueName: \"kubernetes.io/projected/caa8aeec-456e-4d93-883a-efd63ca6f8a8-kube-api-access-f6ncg\") pod \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.223267 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-sg-core-conf-yaml\") pod \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.223479 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-scripts\") pod \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.224266 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "caa8aeec-456e-4d93-883a-efd63ca6f8a8" (UID: "caa8aeec-456e-4d93-883a-efd63ca6f8a8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.224410 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-ceilometer-tls-certs\") pod \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.224464 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-combined-ca-bundle\") pod \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.224493 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-config-data\") pod \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.224542 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-log-httpd\") pod \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\" (UID: \"caa8aeec-456e-4d93-883a-efd63ca6f8a8\") " Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.225118 4689 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.225344 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "caa8aeec-456e-4d93-883a-efd63ca6f8a8" (UID: "caa8aeec-456e-4d93-883a-efd63ca6f8a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.231547 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa8aeec-456e-4d93-883a-efd63ca6f8a8-kube-api-access-f6ncg" (OuterVolumeSpecName: "kube-api-access-f6ncg") pod "caa8aeec-456e-4d93-883a-efd63ca6f8a8" (UID: "caa8aeec-456e-4d93-883a-efd63ca6f8a8"). InnerVolumeSpecName "kube-api-access-f6ncg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.233567 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-scripts" (OuterVolumeSpecName: "scripts") pod "caa8aeec-456e-4d93-883a-efd63ca6f8a8" (UID: "caa8aeec-456e-4d93-883a-efd63ca6f8a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.264969 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "caa8aeec-456e-4d93-883a-efd63ca6f8a8" (UID: "caa8aeec-456e-4d93-883a-efd63ca6f8a8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.294906 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "caa8aeec-456e-4d93-883a-efd63ca6f8a8" (UID: "caa8aeec-456e-4d93-883a-efd63ca6f8a8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.321483 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caa8aeec-456e-4d93-883a-efd63ca6f8a8" (UID: "caa8aeec-456e-4d93-883a-efd63ca6f8a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.329427 4689 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-scripts\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.329631 4689 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.329727 4689 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.330091 4689 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caa8aeec-456e-4d93-883a-efd63ca6f8a8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.330163 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6ncg\" (UniqueName: \"kubernetes.io/projected/caa8aeec-456e-4d93-883a-efd63ca6f8a8-kube-api-access-f6ncg\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.330230 4689 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.360063 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-config-data" (OuterVolumeSpecName: "config-data") pod "caa8aeec-456e-4d93-883a-efd63ca6f8a8" (UID: "caa8aeec-456e-4d93-883a-efd63ca6f8a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.431672 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa8aeec-456e-4d93-883a-efd63ca6f8a8-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.454784 4689 scope.go:117] "RemoveContainer" containerID="37c994c606873636a62994ed7609e596e42de192a923f11a618d108755886555" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.494167 4689 generic.go:334] "Generic (PLEG): container finished" podID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerID="fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c" exitCode=0 Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.494271 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.494272 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerDied","Data":"fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c"} Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.494654 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caa8aeec-456e-4d93-883a-efd63ca6f8a8","Type":"ContainerDied","Data":"d663ba6d21e79ed167e2c307c04a07a836941634519bcc5f74a76deabe98f30d"} Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.494760 4689 scope.go:117] "RemoveContainer" containerID="9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.497937 4689 scope.go:117] "RemoveContainer" containerID="a1c61da5e27022bc3257486369e76e7126d83239ca19ba64d25e4cb765522ca6" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.522728 4689 scope.go:117] "RemoveContainer" containerID="90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.536204 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.547832 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.560130 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:03:45 crc kubenswrapper[4689]: E1201 09:03:45.560596 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="ceilometer-notification-agent" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.560617 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="ceilometer-notification-agent" Dec 01 09:03:45 crc kubenswrapper[4689]: E1201 09:03:45.560628 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="ceilometer-central-agent" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.560634 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="ceilometer-central-agent" Dec 01 09:03:45 crc kubenswrapper[4689]: E1201 09:03:45.560654 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="sg-core" Dec 01 09:03:45 crc 
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.560660 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="sg-core"
Dec 01 09:03:45 crc kubenswrapper[4689]: E1201 09:03:45.560674 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="proxy-httpd"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.560681 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="proxy-httpd"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.560914 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="proxy-httpd"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.560938 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="sg-core"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.560961 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="ceilometer-notification-agent"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.560972 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" containerName="ceilometer-central-agent"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.563297 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.565018 4689 scope.go:117] "RemoveContainer" containerID="5d84c0bc33fa0c594dd2e4ac53c19ea7e3a986eb17d8a353c63f16fb5ad089d6"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.575885 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.576345 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.577014 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.616074 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.623985 4689 scope.go:117] "RemoveContainer" containerID="9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.633458 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-scripts\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.633497 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.633544 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-config-data\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.633573 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5971de46-c278-4f0d-80be-0a7a25d7678c-run-httpd\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.633615 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.633839 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.633872 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wc5n\" (UniqueName: \"kubernetes.io/projected/5971de46-c278-4f0d-80be-0a7a25d7678c-kube-api-access-8wc5n\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.633930 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5971de46-c278-4f0d-80be-0a7a25d7678c-log-httpd\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.663660 4689 scope.go:117] "RemoveContainer" containerID="fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.714699 4689 scope.go:117] "RemoveContainer" containerID="9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb"
Dec 01 09:03:45 crc kubenswrapper[4689]: E1201 09:03:45.715193 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb\": container with ID starting with 9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb not found: ID does not exist" containerID="9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb"
Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.715261 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb"} err="failed to get container status \"9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb\": rpc error: code = NotFound desc = could not find container \"9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb\": container with ID starting with 9a43d1d4a35b8546da78c003286a026d41f85c23e0392458a2a663958b7272fb not found: ID does not exist"
scope.go:117] "RemoveContainer" containerID="90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c" Dec 01 09:03:45 crc kubenswrapper[4689]: E1201 09:03:45.715701 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c\": container with ID starting with 90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c not found: ID does not exist" containerID="90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.715727 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c"} err="failed to get container status \"90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c\": rpc error: code = NotFound desc = could not find container \"90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c\": container with ID starting with 90bb6b55069198500a0fa2f638ad97a5e99b7165957bb632f0390556e102206c not found: ID does not exist" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.715794 4689 scope.go:117] "RemoveContainer" containerID="9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8" Dec 01 09:03:45 crc kubenswrapper[4689]: E1201 09:03:45.717881 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8\": container with ID starting with 9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8 not found: ID does not exist" containerID="9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.717912 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8"} err="failed to get container status \"9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8\": rpc error: code = NotFound desc = could not find container \"9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8\": container with ID starting with 9cfe3a83e8e86947728dbc8df4de33fc5a8aeea5fbb5d2af672bcd4a06f371e8 not found: ID does not exist" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.717929 4689 scope.go:117] "RemoveContainer" containerID="fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c" Dec 01 09:03:45 crc kubenswrapper[4689]: E1201 09:03:45.718234 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c\": container with ID starting with fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c not found: ID does not exist" containerID="fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.718259 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c"} err="failed to get container status \"fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c\": rpc error: code = NotFound desc = could not find container \"fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c\": container with ID starting with 
fa8917866975334f21eeaf4e125bd14f3ac81b2396640ff84f42956a9475339c not found: ID does not exist" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.734853 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.735853 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.736173 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.736216 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wc5n\" (UniqueName: \"kubernetes.io/projected/5971de46-c278-4f0d-80be-0a7a25d7678c-kube-api-access-8wc5n\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.736243 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5971de46-c278-4f0d-80be-0a7a25d7678c-log-httpd\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.736312 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-scripts\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.736334 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.736402 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-config-data\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.736437 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5971de46-c278-4f0d-80be-0a7a25d7678c-run-httpd\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.736489 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.738225 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5971de46-c278-4f0d-80be-0a7a25d7678c-log-httpd\") pod \"ceilometer-0\" (UID: 
\"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.738362 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5971de46-c278-4f0d-80be-0a7a25d7678c-run-httpd\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.741274 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-config-data\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.742690 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.751274 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.753204 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.753333 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5971de46-c278-4f0d-80be-0a7a25d7678c-scripts\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.771271 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wc5n\" (UniqueName: \"kubernetes.io/projected/5971de46-c278-4f0d-80be-0a7a25d7678c-kube-api-access-8wc5n\") pod \"ceilometer-0\" (UID: \"5971de46-c278-4f0d-80be-0a7a25d7678c\") " pod="openstack/ceilometer-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.786518 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 09:03:45 crc kubenswrapper[4689]: I1201 09:03:45.899394 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 01 09:03:46 crc kubenswrapper[4689]: I1201 09:03:46.457201 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 01 09:03:46 crc kubenswrapper[4689]: W1201 09:03:46.460829 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5971de46_c278_4f0d_80be_0a7a25d7678c.slice/crio-e8c1cafe170cbad4213399b9d62f4bafe0619301dc5ddaa8239abb3f47a0f026 WatchSource:0}: Error finding container e8c1cafe170cbad4213399b9d62f4bafe0619301dc5ddaa8239abb3f47a0f026: Status 404 returned error can't find the container with id e8c1cafe170cbad4213399b9d62f4bafe0619301dc5ddaa8239abb3f47a0f026 Dec 01 09:03:46 crc kubenswrapper[4689]: I1201 09:03:46.506873 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5971de46-c278-4f0d-80be-0a7a25d7678c","Type":"ContainerStarted","Data":"e8c1cafe170cbad4213399b9d62f4bafe0619301dc5ddaa8239abb3f47a0f026"} Dec 01 09:03:46 crc kubenswrapper[4689]: I1201 09:03:46.514902 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 01 09:03:47 crc kubenswrapper[4689]: I1201 09:03:47.058557 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa8aeec-456e-4d93-883a-efd63ca6f8a8" path="/var/lib/kubelet/pods/caa8aeec-456e-4d93-883a-efd63ca6f8a8/volumes" Dec 01 09:03:48 crc kubenswrapper[4689]: I1201 09:03:48.535629 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5971de46-c278-4f0d-80be-0a7a25d7678c","Type":"ContainerStarted","Data":"b79f9a737f9c86fd993276271d6bbabb149ee8fe440ea71d440f365f607684cf"} Dec 01 09:03:50 crc kubenswrapper[4689]: I1201 09:03:50.046706 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:03:50 crc kubenswrapper[4689]: E1201 09:03:50.047427 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:03:50 crc kubenswrapper[4689]: I1201 09:03:50.562035 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5971de46-c278-4f0d-80be-0a7a25d7678c","Type":"ContainerStarted","Data":"85b53618eed37d7864b06ef9329d1df1d026c79d370502874b4ecd6db2835aad"} Dec 01 09:03:51 crc kubenswrapper[4689]: I1201 09:03:51.574933 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5971de46-c278-4f0d-80be-0a7a25d7678c","Type":"ContainerStarted","Data":"8ef1404f4fa1076a8d3778c059e0bdfecc991281deeae7f293a9023320b76f43"} Dec 01 09:03:52 crc kubenswrapper[4689]: I1201 09:03:52.587606 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5971de46-c278-4f0d-80be-0a7a25d7678c","Type":"ContainerStarted","Data":"2ee477de2f06f1ef9d15af2bd7f01169a2ad674ec608cf4a13bfffbdbdea1cf1"} Dec 01 09:03:52 crc kubenswrapper[4689]: I1201 09:03:52.587905 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 01 09:03:52 crc kubenswrapper[4689]: I1201 09:03:52.613121 4689 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8429642560000001 podStartE2EDuration="7.61308154s" podCreationTimestamp="2025-12-01 09:03:45 +0000 UTC" firstStartedPulling="2025-12-01 09:03:46.463314928 +0000 UTC m=+1506.535602832" lastFinishedPulling="2025-12-01 09:03:52.233432202 +0000 UTC m=+1512.305720116" observedRunningTime="2025-12-01 09:03:52.607691776 +0000 UTC m=+1512.679979680" watchObservedRunningTime="2025-12-01 09:03:52.61308154 +0000 UTC m=+1512.685388365" Dec 01 09:04:01 crc kubenswrapper[4689]: I1201 09:04:01.054215 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:04:01 crc kubenswrapper[4689]: E1201 09:04:01.055231 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:04:03 crc kubenswrapper[4689]: I1201 09:04:03.708345 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="913d1dab-72d0-4f7b-bea3-78aabac0d13f" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.201:3000/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 01 09:04:13 crc kubenswrapper[4689]: I1201 09:04:13.049096 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:04:13 crc kubenswrapper[4689]: E1201 09:04:13.050041 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:04:15 crc kubenswrapper[4689]: I1201 09:04:15.912454 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 01 09:04:25 crc kubenswrapper[4689]: I1201 09:04:25.366799 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:04:26 crc kubenswrapper[4689]: I1201 09:04:26.270688 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:04:28 crc kubenswrapper[4689]: I1201 09:04:28.047442 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:04:28 crc kubenswrapper[4689]: E1201 09:04:28.047745 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:04:30 crc kubenswrapper[4689]: I1201 09:04:30.427558 4689 kuberuntime_container.go:808] "Killing container 
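The readiness "Probe failed" above is an HTTPS GET against https://10.217.0.201:3000/ that hit the probe client's timeout while ceilometer's proxy-httpd was still warming up; the pod flips to ready at 09:04:15. A minimal sketch of such a probe, assuming the kubelet's behavior of a per-attempt client timeout and unverified TLS on HTTPS probes (the URL is the pod IP from the log and will not resolve elsewhere):

// httpprobe.go - an illustrative readiness-style HTTPS probe; a slow or
// unreachable endpoint produces exactly the Client.Timeout error quoted in
// the log line above. Not the kubelet's prober implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout, // per-attempt budget; exceeding it is a probe failure
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probes skip cert verification
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "... (Client.Timeout exceeded while awaiting headers)"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println(probe("https://10.217.0.201:3000/", time.Second))
}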
with a grace period" pod="openstack/rabbitmq-server-0" podUID="edc6a475-296b-4f29-a48b-6876138662fd" containerName="rabbitmq" containerID="cri-o://a0ba8e18b86610c8300a20ae45e8e15decc2210ba29bef24323021ce9062f808" gracePeriod=604795 Dec 01 09:04:30 crc kubenswrapper[4689]: I1201 09:04:30.866594 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" containerName="rabbitmq" containerID="cri-o://328e6e999625f74b66c204928991821928098f5fd4af8cc7282a217fdf897259" gracePeriod=604796 Dec 01 09:04:32 crc kubenswrapper[4689]: I1201 09:04:32.903053 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="edc6a475-296b-4f29-a48b-6876138662fd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Dec 01 09:04:33 crc kubenswrapper[4689]: I1201 09:04:33.240865 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.148222 4689 generic.go:334] "Generic (PLEG): container finished" podID="edc6a475-296b-4f29-a48b-6876138662fd" containerID="a0ba8e18b86610c8300a20ae45e8e15decc2210ba29bef24323021ce9062f808" exitCode=0 Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.148287 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"edc6a475-296b-4f29-a48b-6876138662fd","Type":"ContainerDied","Data":"a0ba8e18b86610c8300a20ae45e8e15decc2210ba29bef24323021ce9062f808"} Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.152865 4689 generic.go:334] "Generic (PLEG): container finished" podID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" containerID="328e6e999625f74b66c204928991821928098f5fd4af8cc7282a217fdf897259" exitCode=0 Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.152920 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87","Type":"ContainerDied","Data":"328e6e999625f74b66c204928991821928098f5fd4af8cc7282a217fdf897259"} Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.602939 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.610992 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.662686 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edc6a475-296b-4f29-a48b-6876138662fd-erlang-cookie-secret\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.662947 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-plugins\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.663139 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edc6a475-296b-4f29-a48b-6876138662fd-pod-info\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.663322 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvbxm\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-kube-api-access-nvbxm\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.663496 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-confd\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.663612 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-erlang-cookie\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.663730 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.663866 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-config-data\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.663976 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-pod-info\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.664089 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-server-conf\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: 
\"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.664256 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-erlang-cookie-secret\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.664358 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-plugins-conf\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.664480 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-tls\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.664585 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-plugins-conf\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.664805 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-config-data\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.664928 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-erlang-cookie\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.665062 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-server-conf\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.665156 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-confd\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.665289 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.665539 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-tls\") pod \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\" (UID: \"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87\") " Dec 
01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.665669 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxzqp\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-kube-api-access-bxzqp\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.665806 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-plugins\") pod \"edc6a475-296b-4f29-a48b-6876138662fd\" (UID: \"edc6a475-296b-4f29-a48b-6876138662fd\") " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.668631 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.669446 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.675199 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.677968 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.696705 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc6a475-296b-4f29-a48b-6876138662fd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.697986 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.709881 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.720591 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-kube-api-access-bxzqp" (OuterVolumeSpecName: "kube-api-access-bxzqp") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "kube-api-access-bxzqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.720926 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-kube-api-access-nvbxm" (OuterVolumeSpecName: "kube-api-access-nvbxm") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "kube-api-access-nvbxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.721140 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.721276 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.723702 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/edc6a475-296b-4f29-a48b-6876138662fd-pod-info" (OuterVolumeSpecName: "pod-info") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.723971 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.735451 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.737192 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-pod-info" (OuterVolumeSpecName: "pod-info") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.752551 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.772733 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-config-data" (OuterVolumeSpecName: "config-data") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778400 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778461 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxzqp\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-kube-api-access-bxzqp\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778474 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778482 4689 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edc6a475-296b-4f29-a48b-6876138662fd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778491 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778501 4689 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edc6a475-296b-4f29-a48b-6876138662fd-pod-info\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778509 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvbxm\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-kube-api-access-nvbxm\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778519 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc 
kubenswrapper[4689]: I1201 09:04:37.778563 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778573 4689 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-pod-info\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778581 4689 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778590 4689 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778599 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778609 4689 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778617 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778626 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.778640 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.811115 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-config-data" (OuterVolumeSpecName: "config-data") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.844262 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.853246 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-server-conf" (OuterVolumeSpecName: "server-conf") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.862954 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.880210 4689 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-server-conf\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.880250 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.880262 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.880271 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-config-data\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.885871 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-server-conf" (OuterVolumeSpecName: "server-conf") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.976486 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "edc6a475-296b-4f29-a48b-6876138662fd" (UID: "edc6a475-296b-4f29-a48b-6876138662fd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.981970 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edc6a475-296b-4f29-a48b-6876138662fd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:37 crc kubenswrapper[4689]: I1201 09:04:37.982002 4689 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edc6a475-296b-4f29-a48b-6876138662fd-server-conf\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.003627 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" (UID: "50bb385d-f9f3-4a0d-8d26-c0a69a6eba87"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.083796 4689 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.175156 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"50bb385d-f9f3-4a0d-8d26-c0a69a6eba87","Type":"ContainerDied","Data":"553c3448bc49ceb428b6d46962082fe472b898b74e385eacaf8726dcf2345e34"} Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.175212 4689 scope.go:117] "RemoveContainer" containerID="328e6e999625f74b66c204928991821928098f5fd4af8cc7282a217fdf897259" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.175333 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.185092 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"edc6a475-296b-4f29-a48b-6876138662fd","Type":"ContainerDied","Data":"a4015cc93f4b621890027a1de825002152a9a951ffa58be6dc55220aaa0a725c"} Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.185348 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.217968 4689 scope.go:117] "RemoveContainer" containerID="6c94d18bc981b2dbe3f34fea2eb76e6d8e0b233b1a1d374cb8c1ed08c12aed49" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.238020 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.249423 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.257604 4689 scope.go:117] "RemoveContainer" containerID="a0ba8e18b86610c8300a20ae45e8e15decc2210ba29bef24323021ce9062f808" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.259650 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.280084 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.284543 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:04:38 crc kubenswrapper[4689]: E1201 09:04:38.285160 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" containerName="rabbitmq" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.285278 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" containerName="rabbitmq" Dec 01 09:04:38 crc kubenswrapper[4689]: E1201 09:04:38.285459 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" containerName="setup-container" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.285515 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" containerName="setup-container" Dec 01 09:04:38 crc kubenswrapper[4689]: E1201 09:04:38.285578 4689 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="edc6a475-296b-4f29-a48b-6876138662fd" containerName="rabbitmq" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.285626 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc6a475-296b-4f29-a48b-6876138662fd" containerName="rabbitmq" Dec 01 09:04:38 crc kubenswrapper[4689]: E1201 09:04:38.285702 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc6a475-296b-4f29-a48b-6876138662fd" containerName="setup-container" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.285752 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc6a475-296b-4f29-a48b-6876138662fd" containerName="setup-container" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.285995 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" containerName="rabbitmq" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.286087 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc6a475-296b-4f29-a48b-6876138662fd" containerName="rabbitmq" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.287422 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.287834 4689 scope.go:117] "RemoveContainer" containerID="9696664d4002a6911085236ecd7df6fa9f9eb259ffb735c43de9d7035c79daf7" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.289779 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.290054 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.290225 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.290337 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.290495 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.290736 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.290868 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rgwmj" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.337677 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.340999 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.346891 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.347164 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.347347 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.347533 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.351474 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.351967 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.352273 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jgdkg" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.352648 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.361635 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.401205 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.401265 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5100fd48-e762-41b7-ac48-29b85c21dd3d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.401314 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5100fd48-e762-41b7-ac48-29b85c21dd3d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.401485 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5100fd48-e762-41b7-ac48-29b85c21dd3d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.401519 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 
09:04:38.401557 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6vzz\" (UniqueName: \"kubernetes.io/projected/5100fd48-e762-41b7-ac48-29b85c21dd3d-kube-api-access-v6vzz\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.401645 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5100fd48-e762-41b7-ac48-29b85c21dd3d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.401690 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.403322 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.403397 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.403428 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5100fd48-e762-41b7-ac48-29b85c21dd3d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505332 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b5ea820-9372-4a98-8000-75815f156435-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505421 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505454 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: 
I1201 09:04:38.505481 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5100fd48-e762-41b7-ac48-29b85c21dd3d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505521 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505553 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505583 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5100fd48-e762-41b7-ac48-29b85c21dd3d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505614 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b5ea820-9372-4a98-8000-75815f156435-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505642 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5100fd48-e762-41b7-ac48-29b85c21dd3d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505682 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505739 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5100fd48-e762-41b7-ac48-29b85c21dd3d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505771 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505803 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6vzz\" (UniqueName: 
\"kubernetes.io/projected/5100fd48-e762-41b7-ac48-29b85c21dd3d-kube-api-access-v6vzz\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505843 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b5ea820-9372-4a98-8000-75815f156435-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505896 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505923 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5bg7\" (UniqueName: \"kubernetes.io/projected/4b5ea820-9372-4a98-8000-75815f156435-kube-api-access-n5bg7\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505959 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b5ea820-9372-4a98-8000-75815f156435-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.505985 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5100fd48-e762-41b7-ac48-29b85c21dd3d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.506018 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.506047 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.506069 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b5ea820-9372-4a98-8000-75815f156435-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.506110 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.507704 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.507877 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.508349 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5100fd48-e762-41b7-ac48-29b85c21dd3d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.508882 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5100fd48-e762-41b7-ac48-29b85c21dd3d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.508946 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.509986 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5100fd48-e762-41b7-ac48-29b85c21dd3d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.511875 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.512024 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5100fd48-e762-41b7-ac48-29b85c21dd3d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.513791 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5100fd48-e762-41b7-ac48-29b85c21dd3d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" 
Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.513932 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5100fd48-e762-41b7-ac48-29b85c21dd3d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.527080 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6vzz\" (UniqueName: \"kubernetes.io/projected/5100fd48-e762-41b7-ac48-29b85c21dd3d-kube-api-access-v6vzz\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.542165 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5100fd48-e762-41b7-ac48-29b85c21dd3d\") " pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608263 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b5ea820-9372-4a98-8000-75815f156435-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608337 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608388 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b5ea820-9372-4a98-8000-75815f156435-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608453 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608526 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b5ea820-9372-4a98-8000-75815f156435-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608577 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608596 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5bg7\" (UniqueName: 
\"kubernetes.io/projected/4b5ea820-9372-4a98-8000-75815f156435-kube-api-access-n5bg7\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608618 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b5ea820-9372-4a98-8000-75815f156435-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608664 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608680 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b5ea820-9372-4a98-8000-75815f156435-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.608707 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.609188 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.609275 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.610411 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b5ea820-9372-4a98-8000-75815f156435-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.610444 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b5ea820-9372-4a98-8000-75815f156435-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.610729 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.610881 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b5ea820-9372-4a98-8000-75815f156435-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.613102 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b5ea820-9372-4a98-8000-75815f156435-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.613988 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.615592 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b5ea820-9372-4a98-8000-75815f156435-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.617162 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b5ea820-9372-4a98-8000-75815f156435-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.629908 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5bg7\" (UniqueName: \"kubernetes.io/projected/4b5ea820-9372-4a98-8000-75815f156435-kube-api-access-n5bg7\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.639999 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4b5ea820-9372-4a98-8000-75815f156435\") " pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.689654 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 01 09:04:38 crc kubenswrapper[4689]: I1201 09:04:38.699889 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.050665 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:04:39 crc kubenswrapper[4689]: E1201 09:04:39.051126 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.066715 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50bb385d-f9f3-4a0d-8d26-c0a69a6eba87" path="/var/lib/kubelet/pods/50bb385d-f9f3-4a0d-8d26-c0a69a6eba87/volumes" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.067902 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc6a475-296b-4f29-a48b-6876138662fd" path="/var/lib/kubelet/pods/edc6a475-296b-4f29-a48b-6876138662fd/volumes" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.256996 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.271816 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.481584 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-db22n"] Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.483644 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.495405 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.500104 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-db22n"] Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.533188 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gq9v\" (UniqueName: \"kubernetes.io/projected/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-kube-api-access-8gq9v\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.533298 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.533391 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.533416 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-config\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.533493 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.533569 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-svc\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.533619 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.636723 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-swift-storage-0\") pod 
\"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.637338 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.637488 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-config\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.637646 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.638132 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.638557 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-config\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.638741 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.639716 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.641660 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-svc\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.641785 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-svc\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 
crc kubenswrapper[4689]: I1201 09:04:39.641914 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.642620 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.642823 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gq9v\" (UniqueName: \"kubernetes.io/projected/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-kube-api-access-8gq9v\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.666584 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gq9v\" (UniqueName: \"kubernetes.io/projected/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-kube-api-access-8gq9v\") pod \"dnsmasq-dns-67b789f86c-db22n\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:39 crc kubenswrapper[4689]: I1201 09:04:39.800203 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:40 crc kubenswrapper[4689]: I1201 09:04:40.223640 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4b5ea820-9372-4a98-8000-75815f156435","Type":"ContainerStarted","Data":"d3fe063c7e0b3e77fb9660261b73226479a0d3057ca5220197f01ff11c8bb806"} Dec 01 09:04:40 crc kubenswrapper[4689]: I1201 09:04:40.224924 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5100fd48-e762-41b7-ac48-29b85c21dd3d","Type":"ContainerStarted","Data":"126104e05b9c81fe44fbaa00576f8907c5cb0a1bba9a11a749dd3b6355529f9f"} Dec 01 09:04:40 crc kubenswrapper[4689]: I1201 09:04:40.278773 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-db22n"] Dec 01 09:04:41 crc kubenswrapper[4689]: I1201 09:04:41.235689 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5100fd48-e762-41b7-ac48-29b85c21dd3d","Type":"ContainerStarted","Data":"55a2675f6d6876237e4f1310532ce5fb912180735b201aaeda03fbfea09d660c"} Dec 01 09:04:41 crc kubenswrapper[4689]: I1201 09:04:41.237447 4689 generic.go:334] "Generic (PLEG): container finished" podID="2ed8e502-bcf1-4bf7-8e16-dd545f396d60" containerID="7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3" exitCode=0 Dec 01 09:04:41 crc kubenswrapper[4689]: I1201 09:04:41.237505 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-db22n" event={"ID":"2ed8e502-bcf1-4bf7-8e16-dd545f396d60","Type":"ContainerDied","Data":"7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3"} Dec 01 09:04:41 crc kubenswrapper[4689]: I1201 09:04:41.237527 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-67b789f86c-db22n" event={"ID":"2ed8e502-bcf1-4bf7-8e16-dd545f396d60","Type":"ContainerStarted","Data":"beae7fb1be11169a9734aee523f75ea22844b69734534108aa24507d0aee02b0"} Dec 01 09:04:41 crc kubenswrapper[4689]: I1201 09:04:41.240334 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4b5ea820-9372-4a98-8000-75815f156435","Type":"ContainerStarted","Data":"28debea55670c9498fd74163d1c5c5e6673c73d396ee40c7ba234e002460a42a"} Dec 01 09:04:42 crc kubenswrapper[4689]: I1201 09:04:42.252000 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-db22n" event={"ID":"2ed8e502-bcf1-4bf7-8e16-dd545f396d60","Type":"ContainerStarted","Data":"a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584"} Dec 01 09:04:42 crc kubenswrapper[4689]: I1201 09:04:42.273215 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-db22n" podStartSLOduration=3.273192653 podStartE2EDuration="3.273192653s" podCreationTimestamp="2025-12-01 09:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:04:42.268824655 +0000 UTC m=+1562.341112579" watchObservedRunningTime="2025-12-01 09:04:42.273192653 +0000 UTC m=+1562.345480577" Dec 01 09:04:43 crc kubenswrapper[4689]: I1201 09:04:43.260431 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:45 crc kubenswrapper[4689]: I1201 09:04:45.796561 4689 scope.go:117] "RemoveContainer" containerID="e71e70e63614adcc804aca95ac195b353f6938e502c6418ac9609f75ff113a02" Dec 01 09:04:49 crc kubenswrapper[4689]: I1201 09:04:49.802688 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:04:49 crc kubenswrapper[4689]: I1201 09:04:49.913375 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-ks5fk"] Dec 01 09:04:49 crc kubenswrapper[4689]: I1201 09:04:49.913770 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" podUID="a85b360e-a5e5-4769-bc64-7ccebba08bd1" containerName="dnsmasq-dns" containerID="cri-o://41d8cc8ee569545f61c9e57189483e8a3aeed7f310f24311f854b5cb0df6df1d" gracePeriod=10 Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.162295 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bcf8b9d95-dkgsn"] Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.167219 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.179623 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bcf8b9d95-dkgsn"] Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.225443 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-dns-svc\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.225720 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv59s\" (UniqueName: \"kubernetes.io/projected/fb61d912-665c-4e59-b0cf-7e46e24e5201-kube-api-access-xv59s\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.225823 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-ovsdbserver-sb\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.225914 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-dns-swift-storage-0\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.225984 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-ovsdbserver-nb\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.226088 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-openstack-edpm-ipam\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.226192 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-config\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.327504 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-ovsdbserver-sb\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.327894 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv59s\" (UniqueName: \"kubernetes.io/projected/fb61d912-665c-4e59-b0cf-7e46e24e5201-kube-api-access-xv59s\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.327946 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-dns-swift-storage-0\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.327968 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-ovsdbserver-nb\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.328012 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-openstack-edpm-ipam\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.328058 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-config\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.328132 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-dns-svc\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.328477 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-ovsdbserver-sb\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.329071 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-dns-svc\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.329418 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-ovsdbserver-nb\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.329857 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-openstack-edpm-ipam\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.330201 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-dns-swift-storage-0\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.332232 4689 generic.go:334] "Generic (PLEG): container finished" podID="a85b360e-a5e5-4769-bc64-7ccebba08bd1" containerID="41d8cc8ee569545f61c9e57189483e8a3aeed7f310f24311f854b5cb0df6df1d" exitCode=0 Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.332263 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb61d912-665c-4e59-b0cf-7e46e24e5201-config\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.332274 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" event={"ID":"a85b360e-a5e5-4769-bc64-7ccebba08bd1","Type":"ContainerDied","Data":"41d8cc8ee569545f61c9e57189483e8a3aeed7f310f24311f854b5cb0df6df1d"} Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.357381 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv59s\" (UniqueName: \"kubernetes.io/projected/fb61d912-665c-4e59-b0cf-7e46e24e5201-kube-api-access-xv59s\") pod \"dnsmasq-dns-6bcf8b9d95-dkgsn\" (UID: \"fb61d912-665c-4e59-b0cf-7e46e24e5201\") " pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.498145 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.847452 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.948512 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhtwl\" (UniqueName: \"kubernetes.io/projected/a85b360e-a5e5-4769-bc64-7ccebba08bd1-kube-api-access-bhtwl\") pod \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.948590 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-swift-storage-0\") pod \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.948645 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-svc\") pod \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.948684 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-sb\") pod \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.948725 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-config\") pod \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.948815 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-nb\") pod \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\" (UID: \"a85b360e-a5e5-4769-bc64-7ccebba08bd1\") " Dec 01 09:04:50 crc kubenswrapper[4689]: I1201 09:04:50.953851 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85b360e-a5e5-4769-bc64-7ccebba08bd1-kube-api-access-bhtwl" (OuterVolumeSpecName: "kube-api-access-bhtwl") pod "a85b360e-a5e5-4769-bc64-7ccebba08bd1" (UID: "a85b360e-a5e5-4769-bc64-7ccebba08bd1"). InnerVolumeSpecName "kube-api-access-bhtwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.009696 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a85b360e-a5e5-4769-bc64-7ccebba08bd1" (UID: "a85b360e-a5e5-4769-bc64-7ccebba08bd1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.021047 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a85b360e-a5e5-4769-bc64-7ccebba08bd1" (UID: "a85b360e-a5e5-4769-bc64-7ccebba08bd1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.029651 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a85b360e-a5e5-4769-bc64-7ccebba08bd1" (UID: "a85b360e-a5e5-4769-bc64-7ccebba08bd1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.044679 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-config" (OuterVolumeSpecName: "config") pod "a85b360e-a5e5-4769-bc64-7ccebba08bd1" (UID: "a85b360e-a5e5-4769-bc64-7ccebba08bd1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.051263 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.051294 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhtwl\" (UniqueName: \"kubernetes.io/projected/a85b360e-a5e5-4769-bc64-7ccebba08bd1-kube-api-access-bhtwl\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.051305 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.051314 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.051322 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.058837 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a85b360e-a5e5-4769-bc64-7ccebba08bd1" (UID: "a85b360e-a5e5-4769-bc64-7ccebba08bd1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.103106 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bcf8b9d95-dkgsn"] Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.152762 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b360e-a5e5-4769-bc64-7ccebba08bd1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.344875 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" event={"ID":"a85b360e-a5e5-4769-bc64-7ccebba08bd1","Type":"ContainerDied","Data":"8c94115d4e552b1a069f91599a3b48c0de11c87ec5cf5fe45d816bbd6f1741ba"} Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.344941 4689 scope.go:117] "RemoveContainer" containerID="41d8cc8ee569545f61c9e57189483e8a3aeed7f310f24311f854b5cb0df6df1d" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.345160 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-ks5fk" Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.346577 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" event={"ID":"fb61d912-665c-4e59-b0cf-7e46e24e5201","Type":"ContainerStarted","Data":"470ee1d700f8debd3df6b9ad0311030e0c9fd3d0192921f41483d316f755723c"} Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.375983 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-ks5fk"] Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.384496 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-ks5fk"] Dec 01 09:04:51 crc kubenswrapper[4689]: I1201 09:04:51.399612 4689 scope.go:117] "RemoveContainer" containerID="76bfec9994a58e803d9884acd5b9ff9e9dcdfdeeabb617bcd1d4c51dd7d37aed" Dec 01 09:04:52 crc kubenswrapper[4689]: I1201 09:04:52.047117 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:04:52 crc kubenswrapper[4689]: E1201 09:04:52.047635 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:04:52 crc kubenswrapper[4689]: I1201 09:04:52.356309 4689 generic.go:334] "Generic (PLEG): container finished" podID="fb61d912-665c-4e59-b0cf-7e46e24e5201" containerID="ebe0be09ab40005d511babd42b6d2af846b76c7a47cd4814cdc813bbc200b3e5" exitCode=0 Dec 01 09:04:52 crc kubenswrapper[4689]: I1201 09:04:52.356424 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" event={"ID":"fb61d912-665c-4e59-b0cf-7e46e24e5201","Type":"ContainerDied","Data":"ebe0be09ab40005d511babd42b6d2af846b76c7a47cd4814cdc813bbc200b3e5"} Dec 01 09:04:53 crc kubenswrapper[4689]: I1201 09:04:53.065522 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85b360e-a5e5-4769-bc64-7ccebba08bd1" path="/var/lib/kubelet/pods/a85b360e-a5e5-4769-bc64-7ccebba08bd1/volumes" Dec 01 09:04:53 crc kubenswrapper[4689]: I1201 09:04:53.371145 
4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" event={"ID":"fb61d912-665c-4e59-b0cf-7e46e24e5201","Type":"ContainerStarted","Data":"aedb84379f6dc5f78825a734c85dcb70f7a0acf920b8fb671dd31485ad2530a9"} Dec 01 09:04:53 crc kubenswrapper[4689]: I1201 09:04:53.371348 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:04:53 crc kubenswrapper[4689]: I1201 09:04:53.396458 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" podStartSLOduration=3.3964349719999998 podStartE2EDuration="3.396434972s" podCreationTimestamp="2025-12-01 09:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:04:53.391384296 +0000 UTC m=+1573.463672210" watchObservedRunningTime="2025-12-01 09:04:53.396434972 +0000 UTC m=+1573.468722886" Dec 01 09:05:00 crc kubenswrapper[4689]: I1201 09:05:00.500525 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bcf8b9d95-dkgsn" Dec 01 09:05:00 crc kubenswrapper[4689]: I1201 09:05:00.594352 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-db22n"] Dec 01 09:05:00 crc kubenswrapper[4689]: I1201 09:05:00.600127 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-db22n" podUID="2ed8e502-bcf1-4bf7-8e16-dd545f396d60" containerName="dnsmasq-dns" containerID="cri-o://a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584" gracePeriod=10 Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.255408 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.341685 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-openstack-edpm-ipam\") pod \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.341792 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gq9v\" (UniqueName: \"kubernetes.io/projected/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-kube-api-access-8gq9v\") pod \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.341861 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-svc\") pod \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.341918 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-sb\") pod \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.341955 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-nb\") pod \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.341983 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-config\") pod \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.342019 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-swift-storage-0\") pod \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\" (UID: \"2ed8e502-bcf1-4bf7-8e16-dd545f396d60\") " Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.355203 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-kube-api-access-8gq9v" (OuterVolumeSpecName: "kube-api-access-8gq9v") pod "2ed8e502-bcf1-4bf7-8e16-dd545f396d60" (UID: "2ed8e502-bcf1-4bf7-8e16-dd545f396d60"). InnerVolumeSpecName "kube-api-access-8gq9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.445210 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gq9v\" (UniqueName: \"kubernetes.io/projected/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-kube-api-access-8gq9v\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.448614 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ed8e502-bcf1-4bf7-8e16-dd545f396d60" (UID: "2ed8e502-bcf1-4bf7-8e16-dd545f396d60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.449313 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2ed8e502-bcf1-4bf7-8e16-dd545f396d60" (UID: "2ed8e502-bcf1-4bf7-8e16-dd545f396d60"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.465268 4689 generic.go:334] "Generic (PLEG): container finished" podID="2ed8e502-bcf1-4bf7-8e16-dd545f396d60" containerID="a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584" exitCode=0 Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.465338 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-db22n" event={"ID":"2ed8e502-bcf1-4bf7-8e16-dd545f396d60","Type":"ContainerDied","Data":"a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584"} Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.465381 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-db22n" event={"ID":"2ed8e502-bcf1-4bf7-8e16-dd545f396d60","Type":"ContainerDied","Data":"beae7fb1be11169a9734aee523f75ea22844b69734534108aa24507d0aee02b0"} Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.465401 4689 scope.go:117] "RemoveContainer" containerID="a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.465620 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-db22n" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.471764 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2ed8e502-bcf1-4bf7-8e16-dd545f396d60" (UID: "2ed8e502-bcf1-4bf7-8e16-dd545f396d60"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.499434 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-config" (OuterVolumeSpecName: "config") pod "2ed8e502-bcf1-4bf7-8e16-dd545f396d60" (UID: "2ed8e502-bcf1-4bf7-8e16-dd545f396d60"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.508664 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ed8e502-bcf1-4bf7-8e16-dd545f396d60" (UID: "2ed8e502-bcf1-4bf7-8e16-dd545f396d60"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.510072 4689 scope.go:117] "RemoveContainer" containerID="7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.525855 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ed8e502-bcf1-4bf7-8e16-dd545f396d60" (UID: "2ed8e502-bcf1-4bf7-8e16-dd545f396d60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.547002 4689 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.547047 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.547059 4689 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.547069 4689 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-config\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.547078 4689 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.547089 4689 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ed8e502-bcf1-4bf7-8e16-dd545f396d60-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.554274 4689 scope.go:117] "RemoveContainer" containerID="a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584" Dec 01 09:05:01 crc kubenswrapper[4689]: E1201 09:05:01.554722 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584\": container with ID starting with a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584 not found: ID does not exist" containerID="a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.554754 4689 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584"} err="failed to get container status \"a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584\": rpc error: code = NotFound desc = could not find container \"a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584\": container with ID starting with a57393f02f7f3add11f542a210737d0fa086ca436f2f85080d82761b6e521584 not found: ID does not exist" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.554776 4689 scope.go:117] "RemoveContainer" containerID="7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3" Dec 01 09:05:01 crc kubenswrapper[4689]: E1201 09:05:01.555206 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3\": container with ID starting with 7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3 not found: ID does not exist" containerID="7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.555229 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3"} err="failed to get container status \"7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3\": rpc error: code = NotFound desc = could not find container \"7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3\": container with ID starting with 7cbf3979e2ac64e78fdb29ad07e51c8d7bb75be37593f5a1073b2f4420af69a3 not found: ID does not exist" Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.808537 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-db22n"] Dec 01 09:05:01 crc kubenswrapper[4689]: I1201 09:05:01.820159 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-db22n"] Dec 01 09:05:03 crc kubenswrapper[4689]: I1201 09:05:03.060349 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed8e502-bcf1-4bf7-8e16-dd545f396d60" path="/var/lib/kubelet/pods/2ed8e502-bcf1-4bf7-8e16-dd545f396d60/volumes" Dec 01 09:05:06 crc kubenswrapper[4689]: I1201 09:05:06.048977 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:05:06 crc kubenswrapper[4689]: E1201 09:05:06.050148 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.059590 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr"] Dec 01 09:05:13 crc kubenswrapper[4689]: E1201 09:05:13.060658 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85b360e-a5e5-4769-bc64-7ccebba08bd1" containerName="init" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.060676 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85b360e-a5e5-4769-bc64-7ccebba08bd1" containerName="init" Dec 01 09:05:13 crc kubenswrapper[4689]: 
E1201 09:05:13.060685 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85b360e-a5e5-4769-bc64-7ccebba08bd1" containerName="dnsmasq-dns" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.060693 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85b360e-a5e5-4769-bc64-7ccebba08bd1" containerName="dnsmasq-dns" Dec 01 09:05:13 crc kubenswrapper[4689]: E1201 09:05:13.060700 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed8e502-bcf1-4bf7-8e16-dd545f396d60" containerName="dnsmasq-dns" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.060707 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed8e502-bcf1-4bf7-8e16-dd545f396d60" containerName="dnsmasq-dns" Dec 01 09:05:13 crc kubenswrapper[4689]: E1201 09:05:13.060726 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed8e502-bcf1-4bf7-8e16-dd545f396d60" containerName="init" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.060733 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed8e502-bcf1-4bf7-8e16-dd545f396d60" containerName="init" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.060996 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed8e502-bcf1-4bf7-8e16-dd545f396d60" containerName="dnsmasq-dns" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.061018 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="a85b360e-a5e5-4769-bc64-7ccebba08bd1" containerName="dnsmasq-dns" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.061863 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.065954 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.066149 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.071615 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.071838 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.075141 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr"] Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.130633 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.130789 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 
01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.130857 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.130876 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5n6p\" (UniqueName: \"kubernetes.io/projected/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-kube-api-access-c5n6p\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.232684 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.233023 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.233203 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5n6p\" (UniqueName: \"kubernetes.io/projected/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-kube-api-access-c5n6p\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.233525 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.238642 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.238737 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.238896 
4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.252304 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5n6p\" (UniqueName: \"kubernetes.io/projected/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-kube-api-access-c5n6p\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-drctr\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.386664 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.607515 4689 generic.go:334] "Generic (PLEG): container finished" podID="4b5ea820-9372-4a98-8000-75815f156435" containerID="28debea55670c9498fd74163d1c5c5e6673c73d396ee40c7ba234e002460a42a" exitCode=0 Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.607591 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4b5ea820-9372-4a98-8000-75815f156435","Type":"ContainerDied","Data":"28debea55670c9498fd74163d1c5c5e6673c73d396ee40c7ba234e002460a42a"} Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.620862 4689 generic.go:334] "Generic (PLEG): container finished" podID="5100fd48-e762-41b7-ac48-29b85c21dd3d" containerID="55a2675f6d6876237e4f1310532ce5fb912180735b201aaeda03fbfea09d660c" exitCode=0 Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.620908 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5100fd48-e762-41b7-ac48-29b85c21dd3d","Type":"ContainerDied","Data":"55a2675f6d6876237e4f1310532ce5fb912180735b201aaeda03fbfea09d660c"} Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.930978 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr"] Dec 01 09:05:13 crc kubenswrapper[4689]: W1201 09:05:13.933239 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e13f608_85ec_4fe4_b6bb_e651d2f736d3.slice/crio-0c2725e9586a576a9556e652e2dcc24e07a1e1ad3f62ba739bec82b2bed5129f WatchSource:0}: Error finding container 0c2725e9586a576a9556e652e2dcc24e07a1e1ad3f62ba739bec82b2bed5129f: Status 404 returned error can't find the container with id 0c2725e9586a576a9556e652e2dcc24e07a1e1ad3f62ba739bec82b2bed5129f Dec 01 09:05:13 crc kubenswrapper[4689]: I1201 09:05:13.935567 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 09:05:14 crc kubenswrapper[4689]: I1201 09:05:14.633240 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5100fd48-e762-41b7-ac48-29b85c21dd3d","Type":"ContainerStarted","Data":"59aa6b90cdbe801c3fe2bba7c6c2b2ab006311650248acc35f8a43421b5e92b8"} Dec 01 09:05:14 crc kubenswrapper[4689]: I1201 09:05:14.633534 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:05:14 
crc kubenswrapper[4689]: I1201 09:05:14.638987 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" event={"ID":"0e13f608-85ec-4fe4-b6bb-e651d2f736d3","Type":"ContainerStarted","Data":"0c2725e9586a576a9556e652e2dcc24e07a1e1ad3f62ba739bec82b2bed5129f"} Dec 01 09:05:14 crc kubenswrapper[4689]: I1201 09:05:14.642453 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4b5ea820-9372-4a98-8000-75815f156435","Type":"ContainerStarted","Data":"7ef20878106477e8a67902dbe29694c022f6e6158152b03a31a8d334d91fcd4b"} Dec 01 09:05:14 crc kubenswrapper[4689]: I1201 09:05:14.642766 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 01 09:05:14 crc kubenswrapper[4689]: I1201 09:05:14.694290 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.689140284 podStartE2EDuration="36.689140284s" podCreationTimestamp="2025-12-01 09:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:05:14.667210746 +0000 UTC m=+1594.739498660" watchObservedRunningTime="2025-12-01 09:05:14.689140284 +0000 UTC m=+1594.761428188" Dec 01 09:05:14 crc kubenswrapper[4689]: I1201 09:05:14.710788 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.710765565 podStartE2EDuration="36.710765565s" podCreationTimestamp="2025-12-01 09:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:05:14.69863008 +0000 UTC m=+1594.770918004" watchObservedRunningTime="2025-12-01 09:05:14.710765565 +0000 UTC m=+1594.783053459" Dec 01 09:05:19 crc kubenswrapper[4689]: I1201 09:05:19.047654 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:05:19 crc kubenswrapper[4689]: E1201 09:05:19.048639 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:05:28 crc kubenswrapper[4689]: I1201 09:05:28.693868 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 01 09:05:28 crc kubenswrapper[4689]: I1201 09:05:28.705611 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 01 09:05:29 crc kubenswrapper[4689]: I1201 09:05:29.802016 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" event={"ID":"0e13f608-85ec-4fe4-b6bb-e651d2f736d3","Type":"ContainerStarted","Data":"d8b1007c223078c7116d118b193b6c4e5e47b60d7c9cc2187cc7a8b512bf2ada"} Dec 01 09:05:29 crc kubenswrapper[4689]: I1201 09:05:29.821779 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" podStartSLOduration=1.6883220429999999 
podStartE2EDuration="16.821756847s" podCreationTimestamp="2025-12-01 09:05:13 +0000 UTC" firstStartedPulling="2025-12-01 09:05:13.935378347 +0000 UTC m=+1594.007666241" lastFinishedPulling="2025-12-01 09:05:29.068813141 +0000 UTC m=+1609.141101045" observedRunningTime="2025-12-01 09:05:29.820687629 +0000 UTC m=+1609.892975553" watchObservedRunningTime="2025-12-01 09:05:29.821756847 +0000 UTC m=+1609.894044751" Dec 01 09:05:30 crc kubenswrapper[4689]: I1201 09:05:30.047540 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:05:30 crc kubenswrapper[4689]: E1201 09:05:30.047805 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:05:40 crc kubenswrapper[4689]: I1201 09:05:40.923792 4689 generic.go:334] "Generic (PLEG): container finished" podID="0e13f608-85ec-4fe4-b6bb-e651d2f736d3" containerID="d8b1007c223078c7116d118b193b6c4e5e47b60d7c9cc2187cc7a8b512bf2ada" exitCode=0 Dec 01 09:05:40 crc kubenswrapper[4689]: I1201 09:05:40.923885 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" event={"ID":"0e13f608-85ec-4fe4-b6bb-e651d2f736d3","Type":"ContainerDied","Data":"d8b1007c223078c7116d118b193b6c4e5e47b60d7c9cc2187cc7a8b512bf2ada"} Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.413561 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.607039 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5n6p\" (UniqueName: \"kubernetes.io/projected/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-kube-api-access-c5n6p\") pod \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.607324 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-ssh-key\") pod \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.607490 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-inventory\") pod \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.607672 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-repo-setup-combined-ca-bundle\") pod \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\" (UID: \"0e13f608-85ec-4fe4-b6bb-e651d2f736d3\") " Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.614020 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0e13f608-85ec-4fe4-b6bb-e651d2f736d3" (UID: "0e13f608-85ec-4fe4-b6bb-e651d2f736d3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.621684 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-kube-api-access-c5n6p" (OuterVolumeSpecName: "kube-api-access-c5n6p") pod "0e13f608-85ec-4fe4-b6bb-e651d2f736d3" (UID: "0e13f608-85ec-4fe4-b6bb-e651d2f736d3"). InnerVolumeSpecName "kube-api-access-c5n6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.640608 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e13f608-85ec-4fe4-b6bb-e651d2f736d3" (UID: "0e13f608-85ec-4fe4-b6bb-e651d2f736d3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.644064 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-inventory" (OuterVolumeSpecName: "inventory") pod "0e13f608-85ec-4fe4-b6bb-e651d2f736d3" (UID: "0e13f608-85ec-4fe4-b6bb-e651d2f736d3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.710247 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5n6p\" (UniqueName: \"kubernetes.io/projected/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-kube-api-access-c5n6p\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.710298 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.710307 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.710322 4689 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e13f608-85ec-4fe4-b6bb-e651d2f736d3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.944112 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" event={"ID":"0e13f608-85ec-4fe4-b6bb-e651d2f736d3","Type":"ContainerDied","Data":"0c2725e9586a576a9556e652e2dcc24e07a1e1ad3f62ba739bec82b2bed5129f"} Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.944172 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c2725e9586a576a9556e652e2dcc24e07a1e1ad3f62ba739bec82b2bed5129f" Dec 01 09:05:42 crc kubenswrapper[4689]: I1201 09:05:42.944235 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-drctr" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.070162 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn"] Dec 01 09:05:43 crc kubenswrapper[4689]: E1201 09:05:43.070631 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e13f608-85ec-4fe4-b6bb-e651d2f736d3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.070653 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e13f608-85ec-4fe4-b6bb-e651d2f736d3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.070895 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e13f608-85ec-4fe4-b6bb-e651d2f736d3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.072193 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.075123 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.075147 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.075176 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.075131 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.089832 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn"] Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.229553 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcfdn\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.229881 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwx8\" (UniqueName: \"kubernetes.io/projected/1da07875-e46b-4de1-8eea-fb33b293b5a7-kube-api-access-jdwx8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcfdn\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.230232 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcfdn\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.332645 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcfdn\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.332844 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwx8\" (UniqueName: \"kubernetes.io/projected/1da07875-e46b-4de1-8eea-fb33b293b5a7-kube-api-access-jdwx8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcfdn\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.333035 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcfdn\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.336754 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcfdn\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.336900 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcfdn\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.360515 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdwx8\" (UniqueName: \"kubernetes.io/projected/1da07875-e46b-4de1-8eea-fb33b293b5a7-kube-api-access-jdwx8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gcfdn\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.394833 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:43 crc kubenswrapper[4689]: I1201 09:05:43.970833 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn"] Dec 01 09:05:44 crc kubenswrapper[4689]: I1201 09:05:44.976867 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" event={"ID":"1da07875-e46b-4de1-8eea-fb33b293b5a7","Type":"ContainerStarted","Data":"df70d41bbaa590baaa65fbefef9bee7bd6a79784665233db082434e53e408985"} Dec 01 09:05:45 crc kubenswrapper[4689]: I1201 09:05:45.049174 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:05:45 crc kubenswrapper[4689]: E1201 09:05:45.049510 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:05:45 crc kubenswrapper[4689]: I1201 09:05:45.997212 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" event={"ID":"1da07875-e46b-4de1-8eea-fb33b293b5a7","Type":"ContainerStarted","Data":"7d7df8dd5194deb709eaecb10b6229d2bdc1370e68859beb2d84505c9db8a577"} Dec 01 09:05:46 crc kubenswrapper[4689]: I1201 09:05:46.015741 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" podStartSLOduration=2.366606883 podStartE2EDuration="3.015722176s" podCreationTimestamp="2025-12-01 09:05:43 +0000 UTC" firstStartedPulling="2025-12-01 09:05:43.984243586 +0000 UTC m=+1624.056531490" lastFinishedPulling="2025-12-01 09:05:44.633358879 +0000 UTC m=+1624.705646783" observedRunningTime="2025-12-01 
09:05:46.010370855 +0000 UTC m=+1626.082658759" watchObservedRunningTime="2025-12-01 09:05:46.015722176 +0000 UTC m=+1626.088010080" Dec 01 09:05:48 crc kubenswrapper[4689]: I1201 09:05:48.026018 4689 generic.go:334] "Generic (PLEG): container finished" podID="1da07875-e46b-4de1-8eea-fb33b293b5a7" containerID="7d7df8dd5194deb709eaecb10b6229d2bdc1370e68859beb2d84505c9db8a577" exitCode=0 Dec 01 09:05:48 crc kubenswrapper[4689]: I1201 09:05:48.026105 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" event={"ID":"1da07875-e46b-4de1-8eea-fb33b293b5a7","Type":"ContainerDied","Data":"7d7df8dd5194deb709eaecb10b6229d2bdc1370e68859beb2d84505c9db8a577"} Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.438874 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.550644 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-ssh-key\") pod \"1da07875-e46b-4de1-8eea-fb33b293b5a7\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.550758 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdwx8\" (UniqueName: \"kubernetes.io/projected/1da07875-e46b-4de1-8eea-fb33b293b5a7-kube-api-access-jdwx8\") pod \"1da07875-e46b-4de1-8eea-fb33b293b5a7\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.550802 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-inventory\") pod \"1da07875-e46b-4de1-8eea-fb33b293b5a7\" (UID: \"1da07875-e46b-4de1-8eea-fb33b293b5a7\") " Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.555893 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da07875-e46b-4de1-8eea-fb33b293b5a7-kube-api-access-jdwx8" (OuterVolumeSpecName: "kube-api-access-jdwx8") pod "1da07875-e46b-4de1-8eea-fb33b293b5a7" (UID: "1da07875-e46b-4de1-8eea-fb33b293b5a7"). InnerVolumeSpecName "kube-api-access-jdwx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.584257 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-inventory" (OuterVolumeSpecName: "inventory") pod "1da07875-e46b-4de1-8eea-fb33b293b5a7" (UID: "1da07875-e46b-4de1-8eea-fb33b293b5a7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.591624 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1da07875-e46b-4de1-8eea-fb33b293b5a7" (UID: "1da07875-e46b-4de1-8eea-fb33b293b5a7"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.653819 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.653867 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdwx8\" (UniqueName: \"kubernetes.io/projected/1da07875-e46b-4de1-8eea-fb33b293b5a7-kube-api-access-jdwx8\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:49 crc kubenswrapper[4689]: I1201 09:05:49.653915 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da07875-e46b-4de1-8eea-fb33b293b5a7-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.063659 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" event={"ID":"1da07875-e46b-4de1-8eea-fb33b293b5a7","Type":"ContainerDied","Data":"df70d41bbaa590baaa65fbefef9bee7bd6a79784665233db082434e53e408985"} Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.063723 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df70d41bbaa590baaa65fbefef9bee7bd6a79784665233db082434e53e408985" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.063733 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gcfdn" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.146232 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4"] Dec 01 09:05:50 crc kubenswrapper[4689]: E1201 09:05:50.146875 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da07875-e46b-4de1-8eea-fb33b293b5a7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.146945 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da07875-e46b-4de1-8eea-fb33b293b5a7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.148265 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da07875-e46b-4de1-8eea-fb33b293b5a7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.149107 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.151874 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.153442 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.155537 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4"] Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.155833 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.156158 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.163514 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.163572 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwh4m\" (UniqueName: \"kubernetes.io/projected/caf4bdec-471c-4c07-a5a7-294faf35c880-kube-api-access-dwh4m\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.163766 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.163910 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.266584 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.266655 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwh4m\" (UniqueName: \"kubernetes.io/projected/caf4bdec-471c-4c07-a5a7-294faf35c880-kube-api-access-dwh4m\") 
pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.266720 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.266757 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.271330 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.271732 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.276791 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.286064 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwh4m\" (UniqueName: \"kubernetes.io/projected/caf4bdec-471c-4c07-a5a7-294faf35c880-kube-api-access-dwh4m\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:50 crc kubenswrapper[4689]: I1201 09:05:50.473933 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:05:51 crc kubenswrapper[4689]: I1201 09:05:51.010913 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4"] Dec 01 09:05:51 crc kubenswrapper[4689]: I1201 09:05:51.076056 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" event={"ID":"caf4bdec-471c-4c07-a5a7-294faf35c880","Type":"ContainerStarted","Data":"272e395d7fdab481c77fc89caab52c8f1b17008653d00afe44c8ba9417758cd9"} Dec 01 09:05:52 crc kubenswrapper[4689]: I1201 09:05:52.087479 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" event={"ID":"caf4bdec-471c-4c07-a5a7-294faf35c880","Type":"ContainerStarted","Data":"c41d6026967b5f522709c17ff1a4de54f6ef515d2d3da16462995f81d5252b81"} Dec 01 09:05:58 crc kubenswrapper[4689]: I1201 09:05:58.048843 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:05:58 crc kubenswrapper[4689]: E1201 09:05:58.049619 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.057716 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:06:11 crc kubenswrapper[4689]: E1201 09:06:11.058698 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.085377 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" podStartSLOduration=20.401428834 podStartE2EDuration="21.085346881s" podCreationTimestamp="2025-12-01 09:05:50 +0000 UTC" firstStartedPulling="2025-12-01 09:05:51.021520743 +0000 UTC m=+1631.093808647" lastFinishedPulling="2025-12-01 09:05:51.70543879 +0000 UTC m=+1631.777726694" observedRunningTime="2025-12-01 09:05:52.110353911 +0000 UTC m=+1632.182641825" watchObservedRunningTime="2025-12-01 09:06:11.085346881 +0000 UTC m=+1651.157634785" Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.091393 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pxsb5"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.101573 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rg78r"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.114739 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1ae5-account-create-update-55l2b"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.123058 4689 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-db-create-2w7pr"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.134047 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c798-account-create-update-6jt58"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.142880 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-54f1-account-create-update-p6hrg"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.151864 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2w7pr"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.160586 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pxsb5"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.170052 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1ae5-account-create-update-55l2b"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.180082 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rg78r"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.189002 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c798-account-create-update-6jt58"] Dec 01 09:06:11 crc kubenswrapper[4689]: I1201 09:06:11.197512 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-54f1-account-create-update-p6hrg"] Dec 01 09:06:13 crc kubenswrapper[4689]: I1201 09:06:13.059269 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32bcbe2d-59ae-41f7-861e-04b27330d055" path="/var/lib/kubelet/pods/32bcbe2d-59ae-41f7-861e-04b27330d055/volumes" Dec 01 09:06:13 crc kubenswrapper[4689]: I1201 09:06:13.061406 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b0afecc-7b1c-43f8-b9cd-7595bbd87459" path="/var/lib/kubelet/pods/3b0afecc-7b1c-43f8-b9cd-7595bbd87459/volumes" Dec 01 09:06:13 crc kubenswrapper[4689]: I1201 09:06:13.062317 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8502e89e-509a-4cf5-8308-d769dba3a547" path="/var/lib/kubelet/pods/8502e89e-509a-4cf5-8308-d769dba3a547/volumes" Dec 01 09:06:13 crc kubenswrapper[4689]: I1201 09:06:13.063872 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a07bdee7-df02-4a87-ab8e-68939cd995bd" path="/var/lib/kubelet/pods/a07bdee7-df02-4a87-ab8e-68939cd995bd/volumes" Dec 01 09:06:13 crc kubenswrapper[4689]: I1201 09:06:13.065656 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96ed96a-7a15-4ab1-add3-930f95896b44" path="/var/lib/kubelet/pods/b96ed96a-7a15-4ab1-add3-930f95896b44/volumes" Dec 01 09:06:13 crc kubenswrapper[4689]: I1201 09:06:13.066714 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcefa00f-aa74-4399-8e77-df956e479367" path="/var/lib/kubelet/pods/fcefa00f-aa74-4399-8e77-df956e479367/volumes" Dec 01 09:06:22 crc kubenswrapper[4689]: I1201 09:06:22.047854 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:06:22 crc kubenswrapper[4689]: E1201 09:06:22.048915 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" 
podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:06:35 crc kubenswrapper[4689]: I1201 09:06:35.049442 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:06:35 crc kubenswrapper[4689]: E1201 09:06:35.050868 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:06:37 crc kubenswrapper[4689]: I1201 09:06:37.061191 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2cjdr"] Dec 01 09:06:37 crc kubenswrapper[4689]: I1201 09:06:37.073258 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2cjdr"] Dec 01 09:06:38 crc kubenswrapper[4689]: I1201 09:06:38.045301 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a825-account-create-update-7f62d"] Dec 01 09:06:38 crc kubenswrapper[4689]: I1201 09:06:38.060089 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a825-account-create-update-7f62d"] Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.032579 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-22b7-account-create-update-l4xst"] Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.070115 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08afe837-f3ba-42bb-b61b-492d30229c45" path="/var/lib/kubelet/pods/08afe837-f3ba-42bb-b61b-492d30229c45/volumes" Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.071508 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45efc88a-3b6d-41f2-91fa-8025cfed0b11" path="/var/lib/kubelet/pods/45efc88a-3b6d-41f2-91fa-8025cfed0b11/volumes" Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.072491 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-w42fl"] Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.072522 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-22b7-account-create-update-l4xst"] Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.073163 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jq84v"] Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.081720 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e1d0-account-create-update-922g5"] Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.091268 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-w42fl"] Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.099859 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jq84v"] Dec 01 09:06:39 crc kubenswrapper[4689]: I1201 09:06:39.108407 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e1d0-account-create-update-922g5"] Dec 01 09:06:41 crc kubenswrapper[4689]: I1201 09:06:41.083560 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2761a9b0-bfa4-4992-83d9-532157d688c4" path="/var/lib/kubelet/pods/2761a9b0-bfa4-4992-83d9-532157d688c4/volumes" Dec 01 09:06:41 crc kubenswrapper[4689]: I1201 09:06:41.086233 4689 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b770c2-67d6-4b04-9a4d-a3a1cad52cc6" path="/var/lib/kubelet/pods/75b770c2-67d6-4b04-9a4d-a3a1cad52cc6/volumes" Dec 01 09:06:41 crc kubenswrapper[4689]: I1201 09:06:41.094767 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbffd63-b0cb-41e8-a4a7-d995432ad88c" path="/var/lib/kubelet/pods/9fbffd63-b0cb-41e8-a4a7-d995432ad88c/volumes" Dec 01 09:06:41 crc kubenswrapper[4689]: I1201 09:06:41.097105 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a21dc6-917e-454e-a0ef-c4f21af302b3" path="/var/lib/kubelet/pods/f5a21dc6-917e-454e-a0ef-c4f21af302b3/volumes" Dec 01 09:06:45 crc kubenswrapper[4689]: I1201 09:06:45.994104 4689 scope.go:117] "RemoveContainer" containerID="65f43bcc97ef562b23e0c17d852a26bb7fa26fd5811acde8ece17ce5fcf65515" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.052140 4689 scope.go:117] "RemoveContainer" containerID="35abe5af8a910c47f78d78d2cca418a15ad2aff3ea98171e0d5f158f808e9a37" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.106641 4689 scope.go:117] "RemoveContainer" containerID="df5bd295cdd0f16cc45e0f52af00dd0786f840f2a08d17dc2918764d7cf5433e" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.142074 4689 scope.go:117] "RemoveContainer" containerID="8a40b23d6ad02fadffbef6461a975a8da162769f0b19fe8b3fc9f29dda89da0d" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.190807 4689 scope.go:117] "RemoveContainer" containerID="b5070d66b6d35927be05ee7f821923bcda2c5f89c09741acaab5dd0f84e0fe72" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.235829 4689 scope.go:117] "RemoveContainer" containerID="d853403846e2f530d1ae5b8dd46cfb48c5bec299612c373667b88ddb6cc00291" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.321148 4689 scope.go:117] "RemoveContainer" containerID="61b5a6302b698504a6bb0098be6e2120a5c4d5da5e21e823f01e4d587642a97f" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.347508 4689 scope.go:117] "RemoveContainer" containerID="e063038774bf2c85c2f8a10cfe598939d148c6cab2028e51b7339b821406a104" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.370631 4689 scope.go:117] "RemoveContainer" containerID="74a62c071b1cebedb55d068a8db740442722bb5c88139c8bd4ca26e1a950919a" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.405971 4689 scope.go:117] "RemoveContainer" containerID="3ad5281bd9650caaec47667d7f2d0a6171010575626cbaaafe0e8ce406d6d73f" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.431803 4689 scope.go:117] "RemoveContainer" containerID="57490bb6e9268b7bada778cd46cdb78d2be222b035283746ea691170a54c8ddb" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.452423 4689 scope.go:117] "RemoveContainer" containerID="b420e3d8808e9c474aec7172df259e1b5e282072be07e9360abd024e3d5d8aee" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.475579 4689 scope.go:117] "RemoveContainer" containerID="b27f162348fa9714141b251ad2f6e1ee1b3b107f4421258f0498d55df45806a4" Dec 01 09:06:46 crc kubenswrapper[4689]: I1201 09:06:46.497421 4689 scope.go:117] "RemoveContainer" containerID="8a5d0037c0e01786c1948e970feceedb7d0defff2fc354f6a572480941380d75" Dec 01 09:06:47 crc kubenswrapper[4689]: I1201 09:06:47.039324 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5qxkw"] Dec 01 09:06:47 crc kubenswrapper[4689]: I1201 09:06:47.062703 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5qxkw"] Dec 01 09:06:48 crc kubenswrapper[4689]: I1201 
09:06:48.033017 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wkgqk"] Dec 01 09:06:48 crc kubenswrapper[4689]: I1201 09:06:48.047468 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:06:48 crc kubenswrapper[4689]: E1201 09:06:48.047871 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:06:48 crc kubenswrapper[4689]: I1201 09:06:48.048575 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wkgqk"] Dec 01 09:06:49 crc kubenswrapper[4689]: I1201 09:06:49.068276 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21498a51-fbab-4263-88dd-9c30df75721c" path="/var/lib/kubelet/pods/21498a51-fbab-4263-88dd-9c30df75721c/volumes" Dec 01 09:06:49 crc kubenswrapper[4689]: I1201 09:06:49.070497 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbecbbce-632b-4832-b4aa-6834ff6541e5" path="/var/lib/kubelet/pods/fbecbbce-632b-4832-b4aa-6834ff6541e5/volumes" Dec 01 09:07:00 crc kubenswrapper[4689]: I1201 09:07:00.048324 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:07:00 crc kubenswrapper[4689]: E1201 09:07:00.049465 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:07:15 crc kubenswrapper[4689]: I1201 09:07:15.047917 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:07:15 crc kubenswrapper[4689]: E1201 09:07:15.048626 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:07:27 crc kubenswrapper[4689]: I1201 09:07:27.047721 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:07:27 crc kubenswrapper[4689]: E1201 09:07:27.049064 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:07:29 crc kubenswrapper[4689]: I1201 09:07:29.061775 4689 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/neutron-db-sync-kfc4d"] Dec 01 09:07:29 crc kubenswrapper[4689]: I1201 09:07:29.071814 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kfc4d"] Dec 01 09:07:31 crc kubenswrapper[4689]: I1201 09:07:31.070117 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f240a66f-70cd-4747-b16f-807e6715e7a0" path="/var/lib/kubelet/pods/f240a66f-70cd-4747-b16f-807e6715e7a0/volumes" Dec 01 09:07:39 crc kubenswrapper[4689]: I1201 09:07:39.062817 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-5ttrw"] Dec 01 09:07:39 crc kubenswrapper[4689]: I1201 09:07:39.063291 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-5ttrw"] Dec 01 09:07:40 crc kubenswrapper[4689]: I1201 09:07:40.038196 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4tdn6"] Dec 01 09:07:40 crc kubenswrapper[4689]: I1201 09:07:40.049102 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:07:40 crc kubenswrapper[4689]: E1201 09:07:40.049352 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:07:40 crc kubenswrapper[4689]: I1201 09:07:40.058570 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-f9pr6"] Dec 01 09:07:40 crc kubenswrapper[4689]: I1201 09:07:40.073601 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4tdn6"] Dec 01 09:07:40 crc kubenswrapper[4689]: I1201 09:07:40.088358 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-f9pr6"] Dec 01 09:07:41 crc kubenswrapper[4689]: I1201 09:07:41.066869 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498e2dd1-b659-447d-9f5d-8a86c48fae77" path="/var/lib/kubelet/pods/498e2dd1-b659-447d-9f5d-8a86c48fae77/volumes" Dec 01 09:07:41 crc kubenswrapper[4689]: I1201 09:07:41.068321 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878af3f4-684c-457b-b943-b47aa64dcb58" path="/var/lib/kubelet/pods/878af3f4-684c-457b-b943-b47aa64dcb58/volumes" Dec 01 09:07:41 crc kubenswrapper[4689]: I1201 09:07:41.069235 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc8aad14-4d75-45c4-9456-db0e80ffd8e7" path="/var/lib/kubelet/pods/dc8aad14-4d75-45c4-9456-db0e80ffd8e7/volumes" Dec 01 09:07:46 crc kubenswrapper[4689]: I1201 09:07:46.758159 4689 scope.go:117] "RemoveContainer" containerID="1cd2be9868dfb5bb0036601eef7275e918e44677b634a65081e0bc418c903a5e" Dec 01 09:07:46 crc kubenswrapper[4689]: I1201 09:07:46.809386 4689 scope.go:117] "RemoveContainer" containerID="fab6e207a5bb22be56610cfc7b26e34ca92f987886681d3a174260301254d349" Dec 01 09:07:46 crc kubenswrapper[4689]: I1201 09:07:46.860021 4689 scope.go:117] "RemoveContainer" containerID="2f7514baf81930de56bddc5961cad21fec755c798db6ba412d4b0bbaadcf391a" Dec 01 09:07:46 crc kubenswrapper[4689]: I1201 09:07:46.918196 4689 scope.go:117] "RemoveContainer" 
containerID="333e324eabbfcf368117f7274a7058ff8bdd97d9cf55201eb1b467d1366bbc8e" Dec 01 09:07:46 crc kubenswrapper[4689]: I1201 09:07:46.975692 4689 scope.go:117] "RemoveContainer" containerID="a594a7715aa34dcac9c70c5a096c7b85d42af74194a425c8c8b35f799d8fb14a" Dec 01 09:07:47 crc kubenswrapper[4689]: I1201 09:07:47.013938 4689 scope.go:117] "RemoveContainer" containerID="83ecdc041efdabd181b74491ceb4c864f3fa2b8f0b1c3cc9ed5539e8ed1f522e" Dec 01 09:07:52 crc kubenswrapper[4689]: I1201 09:07:52.047520 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:07:52 crc kubenswrapper[4689]: E1201 09:07:52.048441 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:07:56 crc kubenswrapper[4689]: I1201 09:07:56.053669 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kx454"] Dec 01 09:07:56 crc kubenswrapper[4689]: I1201 09:07:56.063030 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kx454"] Dec 01 09:07:57 crc kubenswrapper[4689]: I1201 09:07:57.063144 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767a61f9-7a7d-43df-b53f-efdc8c693381" path="/var/lib/kubelet/pods/767a61f9-7a7d-43df-b53f-efdc8c693381/volumes" Dec 01 09:08:03 crc kubenswrapper[4689]: I1201 09:08:03.047591 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:08:03 crc kubenswrapper[4689]: E1201 09:08:03.048573 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:08:16 crc kubenswrapper[4689]: I1201 09:08:16.048248 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:08:16 crc kubenswrapper[4689]: E1201 09:08:16.049047 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:08:28 crc kubenswrapper[4689]: I1201 09:08:28.047668 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:08:28 crc kubenswrapper[4689]: E1201 09:08:28.048492 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:08:42 crc kubenswrapper[4689]: I1201 09:08:42.047212 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:08:43 crc kubenswrapper[4689]: I1201 09:08:43.240651 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"fb65526bf7a453c4de10d095fe7ede8e63ed9c728cb3c6e1c38808977ec1a5f0"} Dec 01 09:08:47 crc kubenswrapper[4689]: I1201 09:08:47.175700 4689 scope.go:117] "RemoveContainer" containerID="5b0fae6c6cdf40c359bd0d9c1173095622267bdfc4784caa27b64ba809f165ad" Dec 01 09:09:04 crc kubenswrapper[4689]: I1201 09:09:04.038945 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-w59kl"] Dec 01 09:09:04 crc kubenswrapper[4689]: I1201 09:09:04.046917 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-w59kl"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.071243 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d5f546-10fa-4682-95b1-df605b5f23dc" path="/var/lib/kubelet/pods/86d5f546-10fa-4682-95b1-df605b5f23dc/volumes" Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.072133 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c0cb-account-create-update-j6d5l"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.086250 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c0cb-account-create-update-j6d5l"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.103432 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-rl897"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.113971 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-bcf0-account-create-update-r8ch4"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.121303 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-bcf0-account-create-update-r8ch4"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.137878 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-rl897"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.137948 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-zc7tc"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.144358 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-zc7tc"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.153381 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c9c8-account-create-update-h265p"] Dec 01 09:09:05 crc kubenswrapper[4689]: I1201 09:09:05.163405 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c9c8-account-create-update-h265p"] Dec 01 09:09:07 crc kubenswrapper[4689]: I1201 09:09:07.068135 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02bd29a4-4353-40b1-b147-f6257ef42632" path="/var/lib/kubelet/pods/02bd29a4-4353-40b1-b147-f6257ef42632/volumes" Dec 01 09:09:07 crc kubenswrapper[4689]: I1201 09:09:07.070665 4689 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5b9b4aab-acce-4ef4-b930-9a4942a3dc5d" path="/var/lib/kubelet/pods/5b9b4aab-acce-4ef4-b930-9a4942a3dc5d/volumes" Dec 01 09:09:07 crc kubenswrapper[4689]: I1201 09:09:07.072499 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a8b1dd-5948-4f22-98f2-a3441493cf5a" path="/var/lib/kubelet/pods/64a8b1dd-5948-4f22-98f2-a3441493cf5a/volumes" Dec 01 09:09:07 crc kubenswrapper[4689]: I1201 09:09:07.073866 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69206391-deb0-4dc2-a5e9-a7f7cbdd7844" path="/var/lib/kubelet/pods/69206391-deb0-4dc2-a5e9-a7f7cbdd7844/volumes" Dec 01 09:09:07 crc kubenswrapper[4689]: I1201 09:09:07.076740 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca8b1e2-5166-488b-bda5-7c97602825da" path="/var/lib/kubelet/pods/6ca8b1e2-5166-488b-bda5-7c97602825da/volumes" Dec 01 09:09:11 crc kubenswrapper[4689]: I1201 09:09:11.530305 4689 generic.go:334] "Generic (PLEG): container finished" podID="caf4bdec-471c-4c07-a5a7-294faf35c880" containerID="c41d6026967b5f522709c17ff1a4de54f6ef515d2d3da16462995f81d5252b81" exitCode=0 Dec 01 09:09:11 crc kubenswrapper[4689]: I1201 09:09:11.530421 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" event={"ID":"caf4bdec-471c-4c07-a5a7-294faf35c880","Type":"ContainerDied","Data":"c41d6026967b5f522709c17ff1a4de54f6ef515d2d3da16462995f81d5252b81"} Dec 01 09:09:12 crc kubenswrapper[4689]: I1201 09:09:12.937671 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.020810 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-inventory\") pod \"caf4bdec-471c-4c07-a5a7-294faf35c880\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.020945 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwh4m\" (UniqueName: \"kubernetes.io/projected/caf4bdec-471c-4c07-a5a7-294faf35c880-kube-api-access-dwh4m\") pod \"caf4bdec-471c-4c07-a5a7-294faf35c880\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.021074 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-ssh-key\") pod \"caf4bdec-471c-4c07-a5a7-294faf35c880\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.021209 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-bootstrap-combined-ca-bundle\") pod \"caf4bdec-471c-4c07-a5a7-294faf35c880\" (UID: \"caf4bdec-471c-4c07-a5a7-294faf35c880\") " Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.039970 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf4bdec-471c-4c07-a5a7-294faf35c880-kube-api-access-dwh4m" (OuterVolumeSpecName: "kube-api-access-dwh4m") pod "caf4bdec-471c-4c07-a5a7-294faf35c880" (UID: "caf4bdec-471c-4c07-a5a7-294faf35c880"). InnerVolumeSpecName "kube-api-access-dwh4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.040048 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "caf4bdec-471c-4c07-a5a7-294faf35c880" (UID: "caf4bdec-471c-4c07-a5a7-294faf35c880"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.053918 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-inventory" (OuterVolumeSpecName: "inventory") pod "caf4bdec-471c-4c07-a5a7-294faf35c880" (UID: "caf4bdec-471c-4c07-a5a7-294faf35c880"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.065779 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "caf4bdec-471c-4c07-a5a7-294faf35c880" (UID: "caf4bdec-471c-4c07-a5a7-294faf35c880"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.123519 4689 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.123550 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.123564 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwh4m\" (UniqueName: \"kubernetes.io/projected/caf4bdec-471c-4c07-a5a7-294faf35c880-kube-api-access-dwh4m\") on node \"crc\" DevicePath \"\"" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.123576 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caf4bdec-471c-4c07-a5a7-294faf35c880-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.557932 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" event={"ID":"caf4bdec-471c-4c07-a5a7-294faf35c880","Type":"ContainerDied","Data":"272e395d7fdab481c77fc89caab52c8f1b17008653d00afe44c8ba9417758cd9"} Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.558592 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="272e395d7fdab481c77fc89caab52c8f1b17008653d00afe44c8ba9417758cd9" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.558759 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.647663 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh"] Dec 01 09:09:13 crc kubenswrapper[4689]: E1201 09:09:13.648146 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf4bdec-471c-4c07-a5a7-294faf35c880" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.648171 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf4bdec-471c-4c07-a5a7-294faf35c880" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.648410 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf4bdec-471c-4c07-a5a7-294faf35c880" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.649286 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.654024 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.654217 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.654360 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.654604 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.663104 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh"] Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.734187 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.734242 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.734500 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f92bm\" (UniqueName: \"kubernetes.io/projected/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-kube-api-access-f92bm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.835169 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f92bm\" (UniqueName: \"kubernetes.io/projected/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-kube-api-access-f92bm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.835240 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.835281 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.839735 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.844610 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.859722 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f92bm\" (UniqueName: \"kubernetes.io/projected/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-kube-api-access-f92bm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:13 crc kubenswrapper[4689]: I1201 09:09:13.967472 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:09:14 crc kubenswrapper[4689]: I1201 09:09:14.516176 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh"] Dec 01 09:09:14 crc kubenswrapper[4689]: I1201 09:09:14.569296 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" event={"ID":"7f3287e5-9e76-46ee-91c4-8bc9b69a738f","Type":"ContainerStarted","Data":"817bb0c00dc9b38cd6ba2b47d8ec17468e8b12664f277d7adc2ba6cc785a3731"} Dec 01 09:09:15 crc kubenswrapper[4689]: I1201 09:09:15.580185 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" event={"ID":"7f3287e5-9e76-46ee-91c4-8bc9b69a738f","Type":"ContainerStarted","Data":"7d077367143b53a90bb3b38828f5db086ade011f41c7002ff767398f037541c7"} Dec 01 09:09:15 crc kubenswrapper[4689]: I1201 09:09:15.619308 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" podStartSLOduration=1.968549484 podStartE2EDuration="2.619276899s" podCreationTimestamp="2025-12-01 09:09:13 +0000 UTC" firstStartedPulling="2025-12-01 09:09:14.531148249 +0000 UTC m=+1834.603436183" lastFinishedPulling="2025-12-01 09:09:15.181875694 +0000 UTC m=+1835.254163598" observedRunningTime="2025-12-01 09:09:15.615179192 +0000 UTC m=+1835.687467126" watchObservedRunningTime="2025-12-01 09:09:15.619276899 +0000 UTC m=+1835.691564823" Dec 01 09:09:33 crc kubenswrapper[4689]: I1201 09:09:33.033470 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5p44f"] Dec 01 09:09:33 crc kubenswrapper[4689]: I1201 09:09:33.043272 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5p44f"] Dec 01 09:09:33 crc kubenswrapper[4689]: I1201 09:09:33.067572 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961fc017-9eb5-427a-8d58-a189c19eadc5" path="/var/lib/kubelet/pods/961fc017-9eb5-427a-8d58-a189c19eadc5/volumes" Dec 01 09:09:47 crc kubenswrapper[4689]: I1201 09:09:47.336136 4689 scope.go:117] "RemoveContainer" containerID="795a3f0771aad515880e5a8e9d27c0f932059710825db00e8c874b00486bc760" Dec 01 09:09:47 crc kubenswrapper[4689]: I1201 09:09:47.386439 4689 scope.go:117] "RemoveContainer" containerID="4f756820e53aae93e151411814f9d7216a7d328d3848dd95af96198798ebd783" Dec 01 09:09:47 crc kubenswrapper[4689]: I1201 09:09:47.429502 4689 scope.go:117] "RemoveContainer" containerID="3e637158c35c8e1c0dae4fd60dc0dea19f163dc46207fee3e04d157ea4c846fd" Dec 01 09:09:47 crc kubenswrapper[4689]: I1201 09:09:47.483976 4689 scope.go:117] "RemoveContainer" containerID="e065f7753d63b5d66f266b4af289dd44b7bad3b9bec8b0bc8ef02553cc492924" Dec 01 09:09:47 crc kubenswrapper[4689]: I1201 09:09:47.535749 4689 scope.go:117] "RemoveContainer" containerID="437099511ae5b65c561273bb4417b703c8032f31109e6bf7c406956663ca7ff7" Dec 01 09:09:47 crc kubenswrapper[4689]: I1201 09:09:47.572505 4689 scope.go:117] "RemoveContainer" containerID="17232e796d8a818724743928d131f349c861e7eff94f62322ec1b506bf8db74f" Dec 01 09:09:47 crc kubenswrapper[4689]: I1201 09:09:47.614551 4689 scope.go:117] "RemoveContainer" containerID="75f0f832ac4509a1221c7b9ecc66cec584668738ecc2eef3558817e0472833af" Dec 01 09:10:01 crc kubenswrapper[4689]: I1201 09:10:01.044195 4689 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qds9f"] Dec 01 09:10:01 crc kubenswrapper[4689]: I1201 09:10:01.070393 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qds9f"] Dec 01 09:10:01 crc kubenswrapper[4689]: I1201 09:10:01.083196 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gf7l5"] Dec 01 09:10:01 crc kubenswrapper[4689]: I1201 09:10:01.090614 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-gf7l5"] Dec 01 09:10:03 crc kubenswrapper[4689]: I1201 09:10:03.066701 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08dd5230-a82f-43bc-9517-78b80ed7b39a" path="/var/lib/kubelet/pods/08dd5230-a82f-43bc-9517-78b80ed7b39a/volumes" Dec 01 09:10:03 crc kubenswrapper[4689]: I1201 09:10:03.069502 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63fc0d00-4168-47eb-998a-d32962b46bad" path="/var/lib/kubelet/pods/63fc0d00-4168-47eb-998a-d32962b46bad/volumes" Dec 01 09:10:46 crc kubenswrapper[4689]: I1201 09:10:46.053563 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jlvvx"] Dec 01 09:10:46 crc kubenswrapper[4689]: I1201 09:10:46.068580 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jlvvx"] Dec 01 09:10:47 crc kubenswrapper[4689]: I1201 09:10:47.059222 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd" path="/var/lib/kubelet/pods/3502f3c1-75ed-41f9-b6d3-a7fffbbd9ccd/volumes" Dec 01 09:10:47 crc kubenswrapper[4689]: I1201 09:10:47.744170 4689 scope.go:117] "RemoveContainer" containerID="61cc432ae69ff9c2c03725e948d044f0bdef4f09f870962a779ebfd9f635da7f" Dec 01 09:10:47 crc kubenswrapper[4689]: I1201 09:10:47.784148 4689 scope.go:117] "RemoveContainer" containerID="80d58717b2799765da54ea3219c700d5f309a5d2c17fc50f5cff842d9e2f1f9c" Dec 01 09:10:47 crc kubenswrapper[4689]: I1201 09:10:47.849312 4689 scope.go:117] "RemoveContainer" containerID="89f7562fa8966edf05d3e6681ce4143d625c7241d261a3c58263f01a0f70d79e" Dec 01 09:10:56 crc kubenswrapper[4689]: I1201 09:10:56.397592 4689 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-29dmp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 09:10:56 crc kubenswrapper[4689]: I1201 09:10:56.398104 4689 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-29dmp" podUID="21eaf97a-bf73-4e70-a9bc-153b17b8a799" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 01 09:10:56 crc kubenswrapper[4689]: I1201 09:10:56.648619 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vbkrn" podUID="f94d79da-740a-4080-81d0-ff3bf1867b3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:11:09 crc kubenswrapper[4689]: I1201 09:11:09.146891 4689 patch_prober.go:28] 
interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:11:09 crc kubenswrapper[4689]: I1201 09:11:09.148540 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:11:39 crc kubenswrapper[4689]: I1201 09:11:39.146659 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:11:39 crc kubenswrapper[4689]: I1201 09:11:39.147217 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.770067 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hfm9q"] Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.774806 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.785747 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfm9q"] Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.835733 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-utilities\") pod \"redhat-marketplace-hfm9q\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.835818 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-catalog-content\") pod \"redhat-marketplace-hfm9q\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.835887 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp4l9\" (UniqueName: \"kubernetes.io/projected/84dd867d-bcc1-424b-95bf-f813545c129a-kube-api-access-tp4l9\") pod \"redhat-marketplace-hfm9q\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.938380 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-utilities\") pod \"redhat-marketplace-hfm9q\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " 
pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.938444 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-catalog-content\") pod \"redhat-marketplace-hfm9q\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.938546 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp4l9\" (UniqueName: \"kubernetes.io/projected/84dd867d-bcc1-424b-95bf-f813545c129a-kube-api-access-tp4l9\") pod \"redhat-marketplace-hfm9q\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.939208 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-utilities\") pod \"redhat-marketplace-hfm9q\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.939313 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-catalog-content\") pod \"redhat-marketplace-hfm9q\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:07 crc kubenswrapper[4689]: I1201 09:12:07.960710 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp4l9\" (UniqueName: \"kubernetes.io/projected/84dd867d-bcc1-424b-95bf-f813545c129a-kube-api-access-tp4l9\") pod \"redhat-marketplace-hfm9q\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:08 crc kubenswrapper[4689]: I1201 09:12:08.099177 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:08 crc kubenswrapper[4689]: I1201 09:12:08.429704 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfm9q"] Dec 01 09:12:08 crc kubenswrapper[4689]: I1201 09:12:08.762007 4689 generic.go:334] "Generic (PLEG): container finished" podID="84dd867d-bcc1-424b-95bf-f813545c129a" containerID="192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723" exitCode=0 Dec 01 09:12:08 crc kubenswrapper[4689]: I1201 09:12:08.762057 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfm9q" event={"ID":"84dd867d-bcc1-424b-95bf-f813545c129a","Type":"ContainerDied","Data":"192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723"} Dec 01 09:12:08 crc kubenswrapper[4689]: I1201 09:12:08.762287 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfm9q" event={"ID":"84dd867d-bcc1-424b-95bf-f813545c129a","Type":"ContainerStarted","Data":"5cf199799377aa247daa0873330c4c2bcaafc7945024393e97af1a31b736d4f8"} Dec 01 09:12:08 crc kubenswrapper[4689]: I1201 09:12:08.764578 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 09:12:09 crc kubenswrapper[4689]: I1201 09:12:09.146638 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:12:09 crc kubenswrapper[4689]: I1201 09:12:09.146702 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:12:09 crc kubenswrapper[4689]: I1201 09:12:09.146755 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 09:12:09 crc kubenswrapper[4689]: I1201 09:12:09.147784 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb65526bf7a453c4de10d095fe7ede8e63ed9c728cb3c6e1c38808977ec1a5f0"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:12:09 crc kubenswrapper[4689]: I1201 09:12:09.147858 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://fb65526bf7a453c4de10d095fe7ede8e63ed9c728cb3c6e1c38808977ec1a5f0" gracePeriod=600 Dec 01 09:12:09 crc kubenswrapper[4689]: I1201 09:12:09.775571 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="fb65526bf7a453c4de10d095fe7ede8e63ed9c728cb3c6e1c38808977ec1a5f0" exitCode=0 Dec 01 09:12:09 crc kubenswrapper[4689]: I1201 09:12:09.775669 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" 
event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"fb65526bf7a453c4de10d095fe7ede8e63ed9c728cb3c6e1c38808977ec1a5f0"} Dec 01 09:12:09 crc kubenswrapper[4689]: I1201 09:12:09.775810 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3"} Dec 01 09:12:09 crc kubenswrapper[4689]: I1201 09:12:09.775833 4689 scope.go:117] "RemoveContainer" containerID="3ff597f084dfd52ccbf3aad1aa4bc0d1a664ed524aff9fe436ddb28c1463db49" Dec 01 09:12:10 crc kubenswrapper[4689]: I1201 09:12:10.791782 4689 generic.go:334] "Generic (PLEG): container finished" podID="84dd867d-bcc1-424b-95bf-f813545c129a" containerID="47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee" exitCode=0 Dec 01 09:12:10 crc kubenswrapper[4689]: I1201 09:12:10.791854 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfm9q" event={"ID":"84dd867d-bcc1-424b-95bf-f813545c129a","Type":"ContainerDied","Data":"47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee"} Dec 01 09:12:11 crc kubenswrapper[4689]: I1201 09:12:11.810220 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfm9q" event={"ID":"84dd867d-bcc1-424b-95bf-f813545c129a","Type":"ContainerStarted","Data":"17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec"} Dec 01 09:12:11 crc kubenswrapper[4689]: I1201 09:12:11.835734 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hfm9q" podStartSLOduration=2.158177532 podStartE2EDuration="4.835712395s" podCreationTimestamp="2025-12-01 09:12:07 +0000 UTC" firstStartedPulling="2025-12-01 09:12:08.764090318 +0000 UTC m=+2008.836378232" lastFinishedPulling="2025-12-01 09:12:11.441625181 +0000 UTC m=+2011.513913095" observedRunningTime="2025-12-01 09:12:11.829187965 +0000 UTC m=+2011.901475859" watchObservedRunningTime="2025-12-01 09:12:11.835712395 +0000 UTC m=+2011.908000299" Dec 01 09:12:18 crc kubenswrapper[4689]: I1201 09:12:18.100573 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:18 crc kubenswrapper[4689]: I1201 09:12:18.102567 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:18 crc kubenswrapper[4689]: I1201 09:12:18.171927 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:18 crc kubenswrapper[4689]: I1201 09:12:18.940611 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:18 crc kubenswrapper[4689]: I1201 09:12:18.994957 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfm9q"] Dec 01 09:12:20 crc kubenswrapper[4689]: I1201 09:12:20.912347 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hfm9q" podUID="84dd867d-bcc1-424b-95bf-f813545c129a" containerName="registry-server" containerID="cri-o://17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec" gracePeriod=2 Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 
09:12:21.375709 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.445033 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-catalog-content\") pod \"84dd867d-bcc1-424b-95bf-f813545c129a\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.445194 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-utilities\") pod \"84dd867d-bcc1-424b-95bf-f813545c129a\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.445340 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp4l9\" (UniqueName: \"kubernetes.io/projected/84dd867d-bcc1-424b-95bf-f813545c129a-kube-api-access-tp4l9\") pod \"84dd867d-bcc1-424b-95bf-f813545c129a\" (UID: \"84dd867d-bcc1-424b-95bf-f813545c129a\") " Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.446062 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-utilities" (OuterVolumeSpecName: "utilities") pod "84dd867d-bcc1-424b-95bf-f813545c129a" (UID: "84dd867d-bcc1-424b-95bf-f813545c129a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.457211 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84dd867d-bcc1-424b-95bf-f813545c129a-kube-api-access-tp4l9" (OuterVolumeSpecName: "kube-api-access-tp4l9") pod "84dd867d-bcc1-424b-95bf-f813545c129a" (UID: "84dd867d-bcc1-424b-95bf-f813545c129a"). InnerVolumeSpecName "kube-api-access-tp4l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.463505 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84dd867d-bcc1-424b-95bf-f813545c129a" (UID: "84dd867d-bcc1-424b-95bf-f813545c129a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.546744 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp4l9\" (UniqueName: \"kubernetes.io/projected/84dd867d-bcc1-424b-95bf-f813545c129a-kube-api-access-tp4l9\") on node \"crc\" DevicePath \"\"" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.546782 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.546792 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84dd867d-bcc1-424b-95bf-f813545c129a-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.930135 4689 generic.go:334] "Generic (PLEG): container finished" podID="84dd867d-bcc1-424b-95bf-f813545c129a" containerID="17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec" exitCode=0 Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.930209 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfm9q" event={"ID":"84dd867d-bcc1-424b-95bf-f813545c129a","Type":"ContainerDied","Data":"17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec"} Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.930472 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hfm9q" event={"ID":"84dd867d-bcc1-424b-95bf-f813545c129a","Type":"ContainerDied","Data":"5cf199799377aa247daa0873330c4c2bcaafc7945024393e97af1a31b736d4f8"} Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.930504 4689 scope.go:117] "RemoveContainer" containerID="17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.930315 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hfm9q" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.960695 4689 scope.go:117] "RemoveContainer" containerID="47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee" Dec 01 09:12:21 crc kubenswrapper[4689]: I1201 09:12:21.990773 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfm9q"] Dec 01 09:12:22 crc kubenswrapper[4689]: I1201 09:12:22.002129 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hfm9q"] Dec 01 09:12:22 crc kubenswrapper[4689]: I1201 09:12:22.016743 4689 scope.go:117] "RemoveContainer" containerID="192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723" Dec 01 09:12:22 crc kubenswrapper[4689]: I1201 09:12:22.080687 4689 scope.go:117] "RemoveContainer" containerID="17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec" Dec 01 09:12:22 crc kubenswrapper[4689]: E1201 09:12:22.081217 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec\": container with ID starting with 17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec not found: ID does not exist" containerID="17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec" Dec 01 09:12:22 crc kubenswrapper[4689]: I1201 09:12:22.081288 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec"} err="failed to get container status \"17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec\": rpc error: code = NotFound desc = could not find container \"17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec\": container with ID starting with 17d49d6f3296e17d368a5b8a9f8a3c1e7ff0896d949f78eaadc40bbf9e3c17ec not found: ID does not exist" Dec 01 09:12:22 crc kubenswrapper[4689]: I1201 09:12:22.081310 4689 scope.go:117] "RemoveContainer" containerID="47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee" Dec 01 09:12:22 crc kubenswrapper[4689]: E1201 09:12:22.082570 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee\": container with ID starting with 47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee not found: ID does not exist" containerID="47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee" Dec 01 09:12:22 crc kubenswrapper[4689]: I1201 09:12:22.082601 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee"} err="failed to get container status \"47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee\": rpc error: code = NotFound desc = could not find container \"47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee\": container with ID starting with 47c5911c6cb0b2138acd86890f0ad2e12b5b5c09abae901656418b0ba57db1ee not found: ID does not exist" Dec 01 09:12:22 crc kubenswrapper[4689]: I1201 09:12:22.082617 4689 scope.go:117] "RemoveContainer" containerID="192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723" Dec 01 09:12:22 crc kubenswrapper[4689]: E1201 09:12:22.083003 4689 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723\": container with ID starting with 192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723 not found: ID does not exist" containerID="192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723" Dec 01 09:12:22 crc kubenswrapper[4689]: I1201 09:12:22.083027 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723"} err="failed to get container status \"192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723\": rpc error: code = NotFound desc = could not find container \"192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723\": container with ID starting with 192c9bbc32b72e7d2b67b3a6e72c9f51afdd6063432e8baea9d4d7a049cc2723 not found: ID does not exist" Dec 01 09:12:23 crc kubenswrapper[4689]: I1201 09:12:23.063750 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84dd867d-bcc1-424b-95bf-f813545c129a" path="/var/lib/kubelet/pods/84dd867d-bcc1-424b-95bf-f813545c129a/volumes" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.579947 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xmrc"] Dec 01 09:12:28 crc kubenswrapper[4689]: E1201 09:12:28.581066 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dd867d-bcc1-424b-95bf-f813545c129a" containerName="registry-server" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.581082 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dd867d-bcc1-424b-95bf-f813545c129a" containerName="registry-server" Dec 01 09:12:28 crc kubenswrapper[4689]: E1201 09:12:28.581099 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dd867d-bcc1-424b-95bf-f813545c129a" containerName="extract-content" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.581107 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dd867d-bcc1-424b-95bf-f813545c129a" containerName="extract-content" Dec 01 09:12:28 crc kubenswrapper[4689]: E1201 09:12:28.581127 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84dd867d-bcc1-424b-95bf-f813545c129a" containerName="extract-utilities" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.581134 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="84dd867d-bcc1-424b-95bf-f813545c129a" containerName="extract-utilities" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.581312 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="84dd867d-bcc1-424b-95bf-f813545c129a" containerName="registry-server" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.582713 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.604872 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xmrc"] Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.678477 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-utilities\") pod \"certified-operators-6xmrc\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.678580 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-catalog-content\") pod \"certified-operators-6xmrc\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.678640 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5xv\" (UniqueName: \"kubernetes.io/projected/191202a4-e565-4d60-9744-3ef6f10e30e7-kube-api-access-7p5xv\") pod \"certified-operators-6xmrc\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.780944 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-utilities\") pod \"certified-operators-6xmrc\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.781331 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-catalog-content\") pod \"certified-operators-6xmrc\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.781713 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p5xv\" (UniqueName: \"kubernetes.io/projected/191202a4-e565-4d60-9744-3ef6f10e30e7-kube-api-access-7p5xv\") pod \"certified-operators-6xmrc\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.781781 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-catalog-content\") pod \"certified-operators-6xmrc\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.781774 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-utilities\") pod \"certified-operators-6xmrc\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.806356 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7p5xv\" (UniqueName: \"kubernetes.io/projected/191202a4-e565-4d60-9744-3ef6f10e30e7-kube-api-access-7p5xv\") pod \"certified-operators-6xmrc\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:28 crc kubenswrapper[4689]: I1201 09:12:28.900333 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:29 crc kubenswrapper[4689]: I1201 09:12:29.531216 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xmrc"] Dec 01 09:12:30 crc kubenswrapper[4689]: I1201 09:12:30.069162 4689 generic.go:334] "Generic (PLEG): container finished" podID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerID="aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24" exitCode=0 Dec 01 09:12:30 crc kubenswrapper[4689]: I1201 09:12:30.069610 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xmrc" event={"ID":"191202a4-e565-4d60-9744-3ef6f10e30e7","Type":"ContainerDied","Data":"aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24"} Dec 01 09:12:30 crc kubenswrapper[4689]: I1201 09:12:30.069643 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xmrc" event={"ID":"191202a4-e565-4d60-9744-3ef6f10e30e7","Type":"ContainerStarted","Data":"a3cdf382820c736ea58bf7f5c53f32d3c5cdb90cfdbfc578aa007ff5e6871f67"} Dec 01 09:12:31 crc kubenswrapper[4689]: I1201 09:12:31.086879 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xmrc" event={"ID":"191202a4-e565-4d60-9744-3ef6f10e30e7","Type":"ContainerStarted","Data":"cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd"} Dec 01 09:12:33 crc kubenswrapper[4689]: I1201 09:12:33.169631 4689 generic.go:334] "Generic (PLEG): container finished" podID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerID="cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd" exitCode=0 Dec 01 09:12:33 crc kubenswrapper[4689]: I1201 09:12:33.169679 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xmrc" event={"ID":"191202a4-e565-4d60-9744-3ef6f10e30e7","Type":"ContainerDied","Data":"cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd"} Dec 01 09:12:34 crc kubenswrapper[4689]: I1201 09:12:34.184487 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xmrc" event={"ID":"191202a4-e565-4d60-9744-3ef6f10e30e7","Type":"ContainerStarted","Data":"40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc"} Dec 01 09:12:34 crc kubenswrapper[4689]: I1201 09:12:34.204196 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xmrc" podStartSLOduration=2.593769352 podStartE2EDuration="6.204179079s" podCreationTimestamp="2025-12-01 09:12:28 +0000 UTC" firstStartedPulling="2025-12-01 09:12:30.071730541 +0000 UTC m=+2030.144018445" lastFinishedPulling="2025-12-01 09:12:33.682140268 +0000 UTC m=+2033.754428172" observedRunningTime="2025-12-01 09:12:34.202692348 +0000 UTC m=+2034.274980262" watchObservedRunningTime="2025-12-01 09:12:34.204179079 +0000 UTC m=+2034.276466983" Dec 01 09:12:38 crc kubenswrapper[4689]: I1201 09:12:38.901010 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:38 crc kubenswrapper[4689]: I1201 09:12:38.901731 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:38 crc kubenswrapper[4689]: I1201 09:12:38.981501 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:39 crc kubenswrapper[4689]: I1201 09:12:39.288586 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:39 crc kubenswrapper[4689]: I1201 09:12:39.350734 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xmrc"] Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.257953 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xmrc" podUID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerName="registry-server" containerID="cri-o://40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc" gracePeriod=2 Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.719674 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.872735 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5xv\" (UniqueName: \"kubernetes.io/projected/191202a4-e565-4d60-9744-3ef6f10e30e7-kube-api-access-7p5xv\") pod \"191202a4-e565-4d60-9744-3ef6f10e30e7\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.872851 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-catalog-content\") pod \"191202a4-e565-4d60-9744-3ef6f10e30e7\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.873141 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-utilities\") pod \"191202a4-e565-4d60-9744-3ef6f10e30e7\" (UID: \"191202a4-e565-4d60-9744-3ef6f10e30e7\") " Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.874874 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-utilities" (OuterVolumeSpecName: "utilities") pod "191202a4-e565-4d60-9744-3ef6f10e30e7" (UID: "191202a4-e565-4d60-9744-3ef6f10e30e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.886512 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191202a4-e565-4d60-9744-3ef6f10e30e7-kube-api-access-7p5xv" (OuterVolumeSpecName: "kube-api-access-7p5xv") pod "191202a4-e565-4d60-9744-3ef6f10e30e7" (UID: "191202a4-e565-4d60-9744-3ef6f10e30e7"). InnerVolumeSpecName "kube-api-access-7p5xv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.935802 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "191202a4-e565-4d60-9744-3ef6f10e30e7" (UID: "191202a4-e565-4d60-9744-3ef6f10e30e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.979496 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.979561 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p5xv\" (UniqueName: \"kubernetes.io/projected/191202a4-e565-4d60-9744-3ef6f10e30e7-kube-api-access-7p5xv\") on node \"crc\" DevicePath \"\"" Dec 01 09:12:41 crc kubenswrapper[4689]: I1201 09:12:41.979579 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/191202a4-e565-4d60-9744-3ef6f10e30e7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.277843 4689 generic.go:334] "Generic (PLEG): container finished" podID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerID="40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc" exitCode=0 Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.277918 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xmrc" event={"ID":"191202a4-e565-4d60-9744-3ef6f10e30e7","Type":"ContainerDied","Data":"40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc"} Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.278554 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xmrc" event={"ID":"191202a4-e565-4d60-9744-3ef6f10e30e7","Type":"ContainerDied","Data":"a3cdf382820c736ea58bf7f5c53f32d3c5cdb90cfdbfc578aa007ff5e6871f67"} Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.278026 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xmrc" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.278594 4689 scope.go:117] "RemoveContainer" containerID="40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.316390 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xmrc"] Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.326762 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xmrc"] Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.327549 4689 scope.go:117] "RemoveContainer" containerID="cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.366545 4689 scope.go:117] "RemoveContainer" containerID="aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.395182 4689 scope.go:117] "RemoveContainer" containerID="40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc" Dec 01 09:12:42 crc kubenswrapper[4689]: E1201 09:12:42.395609 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc\": container with ID starting with 40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc not found: ID does not exist" containerID="40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.395637 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc"} err="failed to get container status \"40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc\": rpc error: code = NotFound desc = could not find container \"40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc\": container with ID starting with 40a4df97e1babe94f72fb63b6cb7c8ace92c685fb43b86c77a7775f75fadd8bc not found: ID does not exist" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.395657 4689 scope.go:117] "RemoveContainer" containerID="cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd" Dec 01 09:12:42 crc kubenswrapper[4689]: E1201 09:12:42.395965 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd\": container with ID starting with cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd not found: ID does not exist" containerID="cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.395989 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd"} err="failed to get container status \"cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd\": rpc error: code = NotFound desc = could not find container \"cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd\": container with ID starting with cd437bce4114969e6b38039fe473d3bc437eb1f0c076a72b6d5dc45151461cdd not found: ID does not exist" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.396002 4689 scope.go:117] "RemoveContainer" 
containerID="aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24" Dec 01 09:12:42 crc kubenswrapper[4689]: E1201 09:12:42.396335 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24\": container with ID starting with aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24 not found: ID does not exist" containerID="aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24" Dec 01 09:12:42 crc kubenswrapper[4689]: I1201 09:12:42.396457 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24"} err="failed to get container status \"aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24\": rpc error: code = NotFound desc = could not find container \"aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24\": container with ID starting with aa2f86b176c015d0fd08bb8d259d775bfa1d5fbecae6135ea73eb87e779b8e24 not found: ID does not exist" Dec 01 09:12:43 crc kubenswrapper[4689]: I1201 09:12:43.062129 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="191202a4-e565-4d60-9744-3ef6f10e30e7" path="/var/lib/kubelet/pods/191202a4-e565-4d60-9744-3ef6f10e30e7/volumes" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.064119 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k6n9v"] Dec 01 09:13:05 crc kubenswrapper[4689]: E1201 09:13:05.064941 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerName="registry-server" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.064954 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerName="registry-server" Dec 01 09:13:05 crc kubenswrapper[4689]: E1201 09:13:05.064977 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerName="extract-utilities" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.064985 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerName="extract-utilities" Dec 01 09:13:05 crc kubenswrapper[4689]: E1201 09:13:05.064996 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerName="extract-content" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.065002 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerName="extract-content" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.065190 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="191202a4-e565-4d60-9744-3ef6f10e30e7" containerName="registry-server" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.066478 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.110884 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k6n9v"] Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.147509 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr567\" (UniqueName: \"kubernetes.io/projected/3e91eb5d-6bfb-460e-b950-6df17a600bf0-kube-api-access-fr567\") pod \"redhat-operators-k6n9v\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.147596 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-utilities\") pod \"redhat-operators-k6n9v\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.147700 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-catalog-content\") pod \"redhat-operators-k6n9v\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.250356 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr567\" (UniqueName: \"kubernetes.io/projected/3e91eb5d-6bfb-460e-b950-6df17a600bf0-kube-api-access-fr567\") pod \"redhat-operators-k6n9v\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.250480 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-utilities\") pod \"redhat-operators-k6n9v\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.250602 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-catalog-content\") pod \"redhat-operators-k6n9v\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.250902 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-utilities\") pod \"redhat-operators-k6n9v\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.251068 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-catalog-content\") pod \"redhat-operators-k6n9v\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.271325 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fr567\" (UniqueName: \"kubernetes.io/projected/3e91eb5d-6bfb-460e-b950-6df17a600bf0-kube-api-access-fr567\") pod \"redhat-operators-k6n9v\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.407619 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:05 crc kubenswrapper[4689]: I1201 09:13:05.905314 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k6n9v"] Dec 01 09:13:06 crc kubenswrapper[4689]: I1201 09:13:06.503469 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6n9v" event={"ID":"3e91eb5d-6bfb-460e-b950-6df17a600bf0","Type":"ContainerDied","Data":"70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09"} Dec 01 09:13:06 crc kubenswrapper[4689]: I1201 09:13:06.503309 4689 generic.go:334] "Generic (PLEG): container finished" podID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerID="70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09" exitCode=0 Dec 01 09:13:06 crc kubenswrapper[4689]: I1201 09:13:06.504155 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6n9v" event={"ID":"3e91eb5d-6bfb-460e-b950-6df17a600bf0","Type":"ContainerStarted","Data":"a05b90941e71d1b527fa4eb5bb7fa16048f82907f3d89ff0b5d71c96f001195a"} Dec 01 09:13:09 crc kubenswrapper[4689]: I1201 09:13:09.665747 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6n9v" event={"ID":"3e91eb5d-6bfb-460e-b950-6df17a600bf0","Type":"ContainerStarted","Data":"0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7"} Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.326650 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m78vm"] Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.329848 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.339056 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m78vm"] Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.481769 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-utilities\") pod \"community-operators-m78vm\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.482313 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp7dv\" (UniqueName: \"kubernetes.io/projected/6e3faeb5-df82-4f9d-95ae-666d3d17b984-kube-api-access-rp7dv\") pod \"community-operators-m78vm\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.482610 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-catalog-content\") pod \"community-operators-m78vm\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.584251 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-catalog-content\") pod \"community-operators-m78vm\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.584673 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-utilities\") pod \"community-operators-m78vm\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.584872 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp7dv\" (UniqueName: \"kubernetes.io/projected/6e3faeb5-df82-4f9d-95ae-666d3d17b984-kube-api-access-rp7dv\") pod \"community-operators-m78vm\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.585013 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-catalog-content\") pod \"community-operators-m78vm\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.585210 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-utilities\") pod \"community-operators-m78vm\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.614255 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rp7dv\" (UniqueName: \"kubernetes.io/projected/6e3faeb5-df82-4f9d-95ae-666d3d17b984-kube-api-access-rp7dv\") pod \"community-operators-m78vm\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:14 crc kubenswrapper[4689]: I1201 09:13:14.661221 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:15 crc kubenswrapper[4689]: I1201 09:13:15.190393 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m78vm"] Dec 01 09:13:15 crc kubenswrapper[4689]: I1201 09:13:15.725688 4689 generic.go:334] "Generic (PLEG): container finished" podID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerID="0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7" exitCode=0 Dec 01 09:13:15 crc kubenswrapper[4689]: I1201 09:13:15.725765 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6n9v" event={"ID":"3e91eb5d-6bfb-460e-b950-6df17a600bf0","Type":"ContainerDied","Data":"0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7"} Dec 01 09:13:15 crc kubenswrapper[4689]: I1201 09:13:15.729860 4689 generic.go:334] "Generic (PLEG): container finished" podID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerID="0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37" exitCode=0 Dec 01 09:13:15 crc kubenswrapper[4689]: I1201 09:13:15.729895 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m78vm" event={"ID":"6e3faeb5-df82-4f9d-95ae-666d3d17b984","Type":"ContainerDied","Data":"0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37"} Dec 01 09:13:15 crc kubenswrapper[4689]: I1201 09:13:15.729917 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m78vm" event={"ID":"6e3faeb5-df82-4f9d-95ae-666d3d17b984","Type":"ContainerStarted","Data":"c003e79652d0e61815e2409ee5eb383504770f5522dc32e5d746657d703d8267"} Dec 01 09:13:16 crc kubenswrapper[4689]: I1201 09:13:16.744220 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6n9v" event={"ID":"3e91eb5d-6bfb-460e-b950-6df17a600bf0","Type":"ContainerStarted","Data":"db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc"} Dec 01 09:13:16 crc kubenswrapper[4689]: I1201 09:13:16.775395 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k6n9v" podStartSLOduration=1.852174619 podStartE2EDuration="11.775352985s" podCreationTimestamp="2025-12-01 09:13:05 +0000 UTC" firstStartedPulling="2025-12-01 09:13:06.506197874 +0000 UTC m=+2066.578485778" lastFinishedPulling="2025-12-01 09:13:16.42937625 +0000 UTC m=+2076.501664144" observedRunningTime="2025-12-01 09:13:16.769403462 +0000 UTC m=+2076.841691376" watchObservedRunningTime="2025-12-01 09:13:16.775352985 +0000 UTC m=+2076.847640889" Dec 01 09:13:17 crc kubenswrapper[4689]: I1201 09:13:17.762597 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m78vm" event={"ID":"6e3faeb5-df82-4f9d-95ae-666d3d17b984","Type":"ContainerStarted","Data":"4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85"} Dec 01 09:13:18 crc kubenswrapper[4689]: I1201 09:13:18.773615 4689 generic.go:334] "Generic (PLEG): container finished" 
podID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerID="4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85" exitCode=0 Dec 01 09:13:18 crc kubenswrapper[4689]: I1201 09:13:18.773692 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m78vm" event={"ID":"6e3faeb5-df82-4f9d-95ae-666d3d17b984","Type":"ContainerDied","Data":"4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85"} Dec 01 09:13:19 crc kubenswrapper[4689]: I1201 09:13:19.785646 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m78vm" event={"ID":"6e3faeb5-df82-4f9d-95ae-666d3d17b984","Type":"ContainerStarted","Data":"38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7"} Dec 01 09:13:19 crc kubenswrapper[4689]: I1201 09:13:19.814143 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m78vm" podStartSLOduration=2.21052243 podStartE2EDuration="5.81412037s" podCreationTimestamp="2025-12-01 09:13:14 +0000 UTC" firstStartedPulling="2025-12-01 09:13:15.731386325 +0000 UTC m=+2075.803674229" lastFinishedPulling="2025-12-01 09:13:19.334984255 +0000 UTC m=+2079.407272169" observedRunningTime="2025-12-01 09:13:19.805476674 +0000 UTC m=+2079.877764608" watchObservedRunningTime="2025-12-01 09:13:19.81412037 +0000 UTC m=+2079.886408264" Dec 01 09:13:24 crc kubenswrapper[4689]: I1201 09:13:24.720298 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:24 crc kubenswrapper[4689]: I1201 09:13:24.721517 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:24 crc kubenswrapper[4689]: I1201 09:13:24.802000 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:24 crc kubenswrapper[4689]: I1201 09:13:24.898782 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:25 crc kubenswrapper[4689]: I1201 09:13:25.041514 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m78vm"] Dec 01 09:13:25 crc kubenswrapper[4689]: I1201 09:13:25.408995 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:25 crc kubenswrapper[4689]: I1201 09:13:25.409047 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:25 crc kubenswrapper[4689]: I1201 09:13:25.859409 4689 generic.go:334] "Generic (PLEG): container finished" podID="7f3287e5-9e76-46ee-91c4-8bc9b69a738f" containerID="7d077367143b53a90bb3b38828f5db086ade011f41c7002ff767398f037541c7" exitCode=0 Dec 01 09:13:25 crc kubenswrapper[4689]: I1201 09:13:25.859461 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" event={"ID":"7f3287e5-9e76-46ee-91c4-8bc9b69a738f","Type":"ContainerDied","Data":"7d077367143b53a90bb3b38828f5db086ade011f41c7002ff767398f037541c7"} Dec 01 09:13:26 crc kubenswrapper[4689]: I1201 09:13:26.464132 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k6n9v" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" 
containerName="registry-server" probeResult="failure" output=< Dec 01 09:13:26 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s Dec 01 09:13:26 crc kubenswrapper[4689]: > Dec 01 09:13:26 crc kubenswrapper[4689]: I1201 09:13:26.869277 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m78vm" podUID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerName="registry-server" containerID="cri-o://38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7" gracePeriod=2 Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.426412 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.438158 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.495405 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-inventory\") pod \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.495487 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp7dv\" (UniqueName: \"kubernetes.io/projected/6e3faeb5-df82-4f9d-95ae-666d3d17b984-kube-api-access-rp7dv\") pod \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.495679 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-catalog-content\") pod \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.495815 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-ssh-key\") pod \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.495900 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f92bm\" (UniqueName: \"kubernetes.io/projected/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-kube-api-access-f92bm\") pod \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\" (UID: \"7f3287e5-9e76-46ee-91c4-8bc9b69a738f\") " Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.495936 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-utilities\") pod \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\" (UID: \"6e3faeb5-df82-4f9d-95ae-666d3d17b984\") " Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.496457 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-utilities" (OuterVolumeSpecName: "utilities") pod "6e3faeb5-df82-4f9d-95ae-666d3d17b984" (UID: "6e3faeb5-df82-4f9d-95ae-666d3d17b984"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.503716 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3faeb5-df82-4f9d-95ae-666d3d17b984-kube-api-access-rp7dv" (OuterVolumeSpecName: "kube-api-access-rp7dv") pod "6e3faeb5-df82-4f9d-95ae-666d3d17b984" (UID: "6e3faeb5-df82-4f9d-95ae-666d3d17b984"). InnerVolumeSpecName "kube-api-access-rp7dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.503970 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-kube-api-access-f92bm" (OuterVolumeSpecName: "kube-api-access-f92bm") pod "7f3287e5-9e76-46ee-91c4-8bc9b69a738f" (UID: "7f3287e5-9e76-46ee-91c4-8bc9b69a738f"). InnerVolumeSpecName "kube-api-access-f92bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.534498 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-inventory" (OuterVolumeSpecName: "inventory") pod "7f3287e5-9e76-46ee-91c4-8bc9b69a738f" (UID: "7f3287e5-9e76-46ee-91c4-8bc9b69a738f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.535156 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7f3287e5-9e76-46ee-91c4-8bc9b69a738f" (UID: "7f3287e5-9e76-46ee-91c4-8bc9b69a738f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.549973 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e3faeb5-df82-4f9d-95ae-666d3d17b984" (UID: "6e3faeb5-df82-4f9d-95ae-666d3d17b984"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.598146 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.598177 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.598185 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f92bm\" (UniqueName: \"kubernetes.io/projected/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-kube-api-access-f92bm\") on node \"crc\" DevicePath \"\"" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.598200 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e3faeb5-df82-4f9d-95ae-666d3d17b984-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.598208 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f3287e5-9e76-46ee-91c4-8bc9b69a738f-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.598216 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp7dv\" (UniqueName: \"kubernetes.io/projected/6e3faeb5-df82-4f9d-95ae-666d3d17b984-kube-api-access-rp7dv\") on node \"crc\" DevicePath \"\"" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.885926 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.885948 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh" event={"ID":"7f3287e5-9e76-46ee-91c4-8bc9b69a738f","Type":"ContainerDied","Data":"817bb0c00dc9b38cd6ba2b47d8ec17468e8b12664f277d7adc2ba6cc785a3731"} Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.886696 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="817bb0c00dc9b38cd6ba2b47d8ec17468e8b12664f277d7adc2ba6cc785a3731" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.890644 4689 generic.go:334] "Generic (PLEG): container finished" podID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerID="38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7" exitCode=0 Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.890709 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m78vm" event={"ID":"6e3faeb5-df82-4f9d-95ae-666d3d17b984","Type":"ContainerDied","Data":"38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7"} Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.890744 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m78vm" event={"ID":"6e3faeb5-df82-4f9d-95ae-666d3d17b984","Type":"ContainerDied","Data":"c003e79652d0e61815e2409ee5eb383504770f5522dc32e5d746657d703d8267"} Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.890765 4689 scope.go:117] "RemoveContainer" containerID="38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 
09:13:27.890783 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m78vm" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.940025 4689 scope.go:117] "RemoveContainer" containerID="4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85" Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.957940 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m78vm"] Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.967041 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m78vm"] Dec 01 09:13:27 crc kubenswrapper[4689]: I1201 09:13:27.986663 4689 scope.go:117] "RemoveContainer" containerID="0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.034956 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w"] Dec 01 09:13:28 crc kubenswrapper[4689]: E1201 09:13:28.035457 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerName="registry-server" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.035476 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerName="registry-server" Dec 01 09:13:28 crc kubenswrapper[4689]: E1201 09:13:28.035497 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerName="extract-utilities" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.035504 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerName="extract-utilities" Dec 01 09:13:28 crc kubenswrapper[4689]: E1201 09:13:28.035521 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3287e5-9e76-46ee-91c4-8bc9b69a738f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.035530 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3287e5-9e76-46ee-91c4-8bc9b69a738f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 09:13:28 crc kubenswrapper[4689]: E1201 09:13:28.035569 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerName="extract-content" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.035577 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerName="extract-content" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.035845 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f3287e5-9e76-46ee-91c4-8bc9b69a738f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.035884 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" containerName="registry-server" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.036721 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.040803 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.041125 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.041359 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.050497 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.058648 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w"] Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.073521 4689 scope.go:117] "RemoveContainer" containerID="38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7" Dec 01 09:13:28 crc kubenswrapper[4689]: E1201 09:13:28.076485 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7\": container with ID starting with 38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7 not found: ID does not exist" containerID="38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.076520 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7"} err="failed to get container status \"38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7\": rpc error: code = NotFound desc = could not find container \"38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7\": container with ID starting with 38b01f1b5bae29cf594557e6d5564f3f3b316815dab56be2fbfe319134c930e7 not found: ID does not exist" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.076543 4689 scope.go:117] "RemoveContainer" containerID="4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85" Dec 01 09:13:28 crc kubenswrapper[4689]: E1201 09:13:28.077411 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85\": container with ID starting with 4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85 not found: ID does not exist" containerID="4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.077430 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85"} err="failed to get container status \"4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85\": rpc error: code = NotFound desc = could not find container \"4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85\": container with ID starting with 4124d1a5bc90f0741e5122298fdd51b7b8653f74640103cdcb837b86a1438d85 not found: ID does not exist" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.077445 
4689 scope.go:117] "RemoveContainer" containerID="0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37" Dec 01 09:13:28 crc kubenswrapper[4689]: E1201 09:13:28.077932 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37\": container with ID starting with 0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37 not found: ID does not exist" containerID="0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.077949 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37"} err="failed to get container status \"0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37\": rpc error: code = NotFound desc = could not find container \"0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37\": container with ID starting with 0bea62f9fbb02adca1905dce89523e4f4c9f21d56c91c98805f8856138eceb37 not found: ID does not exist" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.111198 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf78w\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.111261 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf78w\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.111485 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjrq4\" (UniqueName: \"kubernetes.io/projected/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-kube-api-access-mjrq4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf78w\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.213409 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjrq4\" (UniqueName: \"kubernetes.io/projected/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-kube-api-access-mjrq4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf78w\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.213861 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf78w\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.214719 4689 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf78w\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.220812 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf78w\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.236829 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf78w\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.245104 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjrq4\" (UniqueName: \"kubernetes.io/projected/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-kube-api-access-mjrq4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf78w\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.405479 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:13:28 crc kubenswrapper[4689]: I1201 09:13:28.927637 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w"] Dec 01 09:13:29 crc kubenswrapper[4689]: I1201 09:13:29.062619 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e3faeb5-df82-4f9d-95ae-666d3d17b984" path="/var/lib/kubelet/pods/6e3faeb5-df82-4f9d-95ae-666d3d17b984/volumes" Dec 01 09:13:29 crc kubenswrapper[4689]: I1201 09:13:29.915286 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" event={"ID":"14713a8f-36bf-48fa-bfb2-3c384ad7abd0","Type":"ContainerStarted","Data":"c826ecea2076603706dd2b965bba2a34dc9b9670e53c87131cdb6bb142a643e5"} Dec 01 09:13:30 crc kubenswrapper[4689]: I1201 09:13:30.925936 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" event={"ID":"14713a8f-36bf-48fa-bfb2-3c384ad7abd0","Type":"ContainerStarted","Data":"b6728609bbe1638e0f25ac1a41e7766e871140c616dbabe35617263b823b7921"} Dec 01 09:13:30 crc kubenswrapper[4689]: I1201 09:13:30.948844 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" podStartSLOduration=3.080951987 podStartE2EDuration="3.948823239s" podCreationTimestamp="2025-12-01 09:13:27 +0000 UTC" firstStartedPulling="2025-12-01 09:13:28.928323639 +0000 UTC m=+2089.000611543" lastFinishedPulling="2025-12-01 09:13:29.796194891 +0000 UTC m=+2089.868482795" observedRunningTime="2025-12-01 09:13:30.946784804 +0000 UTC m=+2091.019072708" watchObservedRunningTime="2025-12-01 09:13:30.948823239 +0000 UTC m=+2091.021111143" Dec 01 09:13:35 crc kubenswrapper[4689]: I1201 09:13:35.461400 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:35 crc kubenswrapper[4689]: I1201 09:13:35.526630 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:36 crc kubenswrapper[4689]: I1201 09:13:36.247676 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k6n9v"] Dec 01 09:13:36 crc kubenswrapper[4689]: I1201 09:13:36.997612 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k6n9v" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerName="registry-server" containerID="cri-o://db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc" gracePeriod=2 Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.455961 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.491335 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-catalog-content\") pod \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.491519 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr567\" (UniqueName: \"kubernetes.io/projected/3e91eb5d-6bfb-460e-b950-6df17a600bf0-kube-api-access-fr567\") pod \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.491568 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-utilities\") pod \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\" (UID: \"3e91eb5d-6bfb-460e-b950-6df17a600bf0\") " Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.493030 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-utilities" (OuterVolumeSpecName: "utilities") pod "3e91eb5d-6bfb-460e-b950-6df17a600bf0" (UID: "3e91eb5d-6bfb-460e-b950-6df17a600bf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.502680 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e91eb5d-6bfb-460e-b950-6df17a600bf0-kube-api-access-fr567" (OuterVolumeSpecName: "kube-api-access-fr567") pod "3e91eb5d-6bfb-460e-b950-6df17a600bf0" (UID: "3e91eb5d-6bfb-460e-b950-6df17a600bf0"). InnerVolumeSpecName "kube-api-access-fr567". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.593781 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr567\" (UniqueName: \"kubernetes.io/projected/3e91eb5d-6bfb-460e-b950-6df17a600bf0-kube-api-access-fr567\") on node \"crc\" DevicePath \"\"" Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.593977 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.607885 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e91eb5d-6bfb-460e-b950-6df17a600bf0" (UID: "3e91eb5d-6bfb-460e-b950-6df17a600bf0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:13:37 crc kubenswrapper[4689]: I1201 09:13:37.695170 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e91eb5d-6bfb-460e-b950-6df17a600bf0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.007773 4689 generic.go:334] "Generic (PLEG): container finished" podID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerID="db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc" exitCode=0 Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.007825 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k6n9v" Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.007857 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6n9v" event={"ID":"3e91eb5d-6bfb-460e-b950-6df17a600bf0","Type":"ContainerDied","Data":"db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc"} Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.007938 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6n9v" event={"ID":"3e91eb5d-6bfb-460e-b950-6df17a600bf0","Type":"ContainerDied","Data":"a05b90941e71d1b527fa4eb5bb7fa16048f82907f3d89ff0b5d71c96f001195a"} Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.007972 4689 scope.go:117] "RemoveContainer" containerID="db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc" Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.042569 4689 scope.go:117] "RemoveContainer" containerID="0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7" Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.049968 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k6n9v"] Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.059355 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k6n9v"] Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.064583 4689 scope.go:117] "RemoveContainer" containerID="70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09" Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.106132 4689 scope.go:117] "RemoveContainer" containerID="db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc" Dec 01 09:13:38 crc kubenswrapper[4689]: E1201 09:13:38.106601 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc\": container with ID starting with db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc not found: ID does not exist" containerID="db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc" Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.106650 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc"} err="failed to get container status \"db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc\": rpc error: code = NotFound desc = could not find container \"db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc\": container with ID starting with db6c343c7d5f03f447aac85b61f9b5370a7116f5af1b7d5151fc45ee75843bbc not found: ID does not exist" Dec 01 09:13:38 crc 
kubenswrapper[4689]: I1201 09:13:38.106683 4689 scope.go:117] "RemoveContainer" containerID="0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7" Dec 01 09:13:38 crc kubenswrapper[4689]: E1201 09:13:38.107102 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7\": container with ID starting with 0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7 not found: ID does not exist" containerID="0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7" Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.107134 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7"} err="failed to get container status \"0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7\": rpc error: code = NotFound desc = could not find container \"0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7\": container with ID starting with 0a243fe648c333e200bab4be74f4468215f2c48faa8cb47b5581c5c75768aec7 not found: ID does not exist" Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.107151 4689 scope.go:117] "RemoveContainer" containerID="70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09" Dec 01 09:13:38 crc kubenswrapper[4689]: E1201 09:13:38.107682 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09\": container with ID starting with 70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09 not found: ID does not exist" containerID="70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09" Dec 01 09:13:38 crc kubenswrapper[4689]: I1201 09:13:38.107711 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09"} err="failed to get container status \"70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09\": rpc error: code = NotFound desc = could not find container \"70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09\": container with ID starting with 70c421073287951b0f01a9ea14f05bf19ace046f2def2ba00c100e82689dfd09 not found: ID does not exist" Dec 01 09:13:39 crc kubenswrapper[4689]: I1201 09:13:39.057290 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" path="/var/lib/kubelet/pods/3e91eb5d-6bfb-460e-b950-6df17a600bf0/volumes" Dec 01 09:14:09 crc kubenswrapper[4689]: I1201 09:14:09.147280 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:14:09 crc kubenswrapper[4689]: I1201 09:14:09.147848 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:14:39 crc kubenswrapper[4689]: I1201 09:14:39.146927 4689 patch_prober.go:28] interesting 
pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:14:39 crc kubenswrapper[4689]: I1201 09:14:39.147624 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:14:53 crc kubenswrapper[4689]: I1201 09:14:53.741134 4689 generic.go:334] "Generic (PLEG): container finished" podID="14713a8f-36bf-48fa-bfb2-3c384ad7abd0" containerID="b6728609bbe1638e0f25ac1a41e7766e871140c616dbabe35617263b823b7921" exitCode=0 Dec 01 09:14:53 crc kubenswrapper[4689]: I1201 09:14:53.741230 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" event={"ID":"14713a8f-36bf-48fa-bfb2-3c384ad7abd0","Type":"ContainerDied","Data":"b6728609bbe1638e0f25ac1a41e7766e871140c616dbabe35617263b823b7921"} Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.249641 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.419172 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-inventory\") pod \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.419673 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjrq4\" (UniqueName: \"kubernetes.io/projected/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-kube-api-access-mjrq4\") pod \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.419829 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-ssh-key\") pod \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\" (UID: \"14713a8f-36bf-48fa-bfb2-3c384ad7abd0\") " Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.426600 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-kube-api-access-mjrq4" (OuterVolumeSpecName: "kube-api-access-mjrq4") pod "14713a8f-36bf-48fa-bfb2-3c384ad7abd0" (UID: "14713a8f-36bf-48fa-bfb2-3c384ad7abd0"). InnerVolumeSpecName "kube-api-access-mjrq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.467554 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-inventory" (OuterVolumeSpecName: "inventory") pod "14713a8f-36bf-48fa-bfb2-3c384ad7abd0" (UID: "14713a8f-36bf-48fa-bfb2-3c384ad7abd0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.472892 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "14713a8f-36bf-48fa-bfb2-3c384ad7abd0" (UID: "14713a8f-36bf-48fa-bfb2-3c384ad7abd0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.522682 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.522722 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjrq4\" (UniqueName: \"kubernetes.io/projected/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-kube-api-access-mjrq4\") on node \"crc\" DevicePath \"\"" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.522733 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/14713a8f-36bf-48fa-bfb2-3c384ad7abd0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.762604 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" event={"ID":"14713a8f-36bf-48fa-bfb2-3c384ad7abd0","Type":"ContainerDied","Data":"c826ecea2076603706dd2b965bba2a34dc9b9670e53c87131cdb6bb142a643e5"} Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.762894 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c826ecea2076603706dd2b965bba2a34dc9b9670e53c87131cdb6bb142a643e5" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.762693 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf78w" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.870020 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w"] Dec 01 09:14:55 crc kubenswrapper[4689]: E1201 09:14:55.870493 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14713a8f-36bf-48fa-bfb2-3c384ad7abd0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.870516 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="14713a8f-36bf-48fa-bfb2-3c384ad7abd0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 09:14:55 crc kubenswrapper[4689]: E1201 09:14:55.870548 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerName="extract-content" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.870555 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerName="extract-content" Dec 01 09:14:55 crc kubenswrapper[4689]: E1201 09:14:55.870570 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerName="registry-server" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.870576 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerName="registry-server" Dec 01 09:14:55 crc kubenswrapper[4689]: E1201 09:14:55.870590 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerName="extract-utilities" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.870596 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerName="extract-utilities" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.870803 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e91eb5d-6bfb-460e-b950-6df17a600bf0" containerName="registry-server" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.870829 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="14713a8f-36bf-48fa-bfb2-3c384ad7abd0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.871571 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.874016 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.874311 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.874858 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.875509 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:14:55 crc kubenswrapper[4689]: I1201 09:14:55.888063 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w"] Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.033196 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42d6w\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.033282 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42d6w\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.033413 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzfl6\" (UniqueName: \"kubernetes.io/projected/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-kube-api-access-jzfl6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42d6w\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.134789 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzfl6\" (UniqueName: \"kubernetes.io/projected/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-kube-api-access-jzfl6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42d6w\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.134906 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42d6w\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.134966 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-42d6w\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.139207 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42d6w\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.139273 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42d6w\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.158268 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzfl6\" (UniqueName: \"kubernetes.io/projected/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-kube-api-access-jzfl6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42d6w\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.189627 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:14:56 crc kubenswrapper[4689]: I1201 09:14:56.769411 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w"] Dec 01 09:14:57 crc kubenswrapper[4689]: I1201 09:14:57.787935 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" event={"ID":"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a","Type":"ContainerStarted","Data":"4c038fc492bfad51f732135893a087a3183b04a62ad5f274cea168ef6d6ea82f"} Dec 01 09:14:58 crc kubenswrapper[4689]: I1201 09:14:58.799817 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" event={"ID":"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a","Type":"ContainerStarted","Data":"b8583e9263c5b17d93d0151a93cceb53003738613df9b8ec9d12df3f006b528b"} Dec 01 09:14:58 crc kubenswrapper[4689]: I1201 09:14:58.823017 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" podStartSLOduration=2.863335422 podStartE2EDuration="3.822991229s" podCreationTimestamp="2025-12-01 09:14:55 +0000 UTC" firstStartedPulling="2025-12-01 09:14:56.771906553 +0000 UTC m=+2176.844194457" lastFinishedPulling="2025-12-01 09:14:57.73156236 +0000 UTC m=+2177.803850264" observedRunningTime="2025-12-01 09:14:58.815463934 +0000 UTC m=+2178.887751838" watchObservedRunningTime="2025-12-01 09:14:58.822991229 +0000 UTC m=+2178.895279133" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.153833 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl"] Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.155741 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.159729 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.165621 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl"] Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.166891 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.324879 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4258\" (UniqueName: \"kubernetes.io/projected/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-kube-api-access-r4258\") pod \"collect-profiles-29409675-zzdpl\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.324946 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-secret-volume\") pod \"collect-profiles-29409675-zzdpl\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.325436 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-config-volume\") pod \"collect-profiles-29409675-zzdpl\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.427566 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-config-volume\") pod \"collect-profiles-29409675-zzdpl\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.427666 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4258\" (UniqueName: \"kubernetes.io/projected/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-kube-api-access-r4258\") pod \"collect-profiles-29409675-zzdpl\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.427713 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-secret-volume\") pod \"collect-profiles-29409675-zzdpl\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.428723 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-config-volume\") pod 
\"collect-profiles-29409675-zzdpl\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.433865 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-secret-volume\") pod \"collect-profiles-29409675-zzdpl\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.454282 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4258\" (UniqueName: \"kubernetes.io/projected/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-kube-api-access-r4258\") pod \"collect-profiles-29409675-zzdpl\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:00 crc kubenswrapper[4689]: I1201 09:15:00.479175 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:01 crc kubenswrapper[4689]: I1201 09:15:01.005042 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl"] Dec 01 09:15:01 crc kubenswrapper[4689]: W1201 09:15:01.038638 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd36efa6_5f28_4753_b93b_7a574aa0b7e6.slice/crio-77d2acadf8f1a9aef764db9096f21d66e8062ddaa0296fed97e4097a2175f1a7 WatchSource:0}: Error finding container 77d2acadf8f1a9aef764db9096f21d66e8062ddaa0296fed97e4097a2175f1a7: Status 404 returned error can't find the container with id 77d2acadf8f1a9aef764db9096f21d66e8062ddaa0296fed97e4097a2175f1a7 Dec 01 09:15:01 crc kubenswrapper[4689]: I1201 09:15:01.842131 4689 generic.go:334] "Generic (PLEG): container finished" podID="dd36efa6-5f28-4753-b93b-7a574aa0b7e6" containerID="7453dfde7cdd18654ff13756d9b96ecf21f83d8639895b007bf8b3b71da36934" exitCode=0 Dec 01 09:15:01 crc kubenswrapper[4689]: I1201 09:15:01.842778 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" event={"ID":"dd36efa6-5f28-4753-b93b-7a574aa0b7e6","Type":"ContainerDied","Data":"7453dfde7cdd18654ff13756d9b96ecf21f83d8639895b007bf8b3b71da36934"} Dec 01 09:15:01 crc kubenswrapper[4689]: I1201 09:15:01.843032 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" event={"ID":"dd36efa6-5f28-4753-b93b-7a574aa0b7e6","Type":"ContainerStarted","Data":"77d2acadf8f1a9aef764db9096f21d66e8062ddaa0296fed97e4097a2175f1a7"} Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.221525 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.406022 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4258\" (UniqueName: \"kubernetes.io/projected/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-kube-api-access-r4258\") pod \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.406084 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-secret-volume\") pod \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.406235 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-config-volume\") pod \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\" (UID: \"dd36efa6-5f28-4753-b93b-7a574aa0b7e6\") " Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.407463 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "dd36efa6-5f28-4753-b93b-7a574aa0b7e6" (UID: "dd36efa6-5f28-4753-b93b-7a574aa0b7e6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.416206 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dd36efa6-5f28-4753-b93b-7a574aa0b7e6" (UID: "dd36efa6-5f28-4753-b93b-7a574aa0b7e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.419735 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-kube-api-access-r4258" (OuterVolumeSpecName: "kube-api-access-r4258") pod "dd36efa6-5f28-4753-b93b-7a574aa0b7e6" (UID: "dd36efa6-5f28-4753-b93b-7a574aa0b7e6"). InnerVolumeSpecName "kube-api-access-r4258". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.509719 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.509787 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4258\" (UniqueName: \"kubernetes.io/projected/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-kube-api-access-r4258\") on node \"crc\" DevicePath \"\"" Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.509802 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dd36efa6-5f28-4753-b93b-7a574aa0b7e6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.867881 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" event={"ID":"dd36efa6-5f28-4753-b93b-7a574aa0b7e6","Type":"ContainerDied","Data":"77d2acadf8f1a9aef764db9096f21d66e8062ddaa0296fed97e4097a2175f1a7"} Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.867963 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77d2acadf8f1a9aef764db9096f21d66e8062ddaa0296fed97e4097a2175f1a7" Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.867902 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409675-zzdpl" Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.871279 4689 generic.go:334] "Generic (PLEG): container finished" podID="dc01b01d-6ad2-4595-ab0f-42cc127d1a7a" containerID="b8583e9263c5b17d93d0151a93cceb53003738613df9b8ec9d12df3f006b528b" exitCode=0 Dec 01 09:15:03 crc kubenswrapper[4689]: I1201 09:15:03.871329 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" event={"ID":"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a","Type":"ContainerDied","Data":"b8583e9263c5b17d93d0151a93cceb53003738613df9b8ec9d12df3f006b528b"} Dec 01 09:15:04 crc kubenswrapper[4689]: I1201 09:15:04.320578 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp"] Dec 01 09:15:04 crc kubenswrapper[4689]: I1201 09:15:04.330358 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409630-9rhzp"] Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.070062 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be1070d3-8d5b-4910-aee6-3fee2a360934" path="/var/lib/kubelet/pods/be1070d3-8d5b-4910-aee6-3fee2a360934/volumes" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.342531 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.451642 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzfl6\" (UniqueName: \"kubernetes.io/projected/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-kube-api-access-jzfl6\") pod \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.451873 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-ssh-key\") pod \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.451967 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-inventory\") pod \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\" (UID: \"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a\") " Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.461642 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-kube-api-access-jzfl6" (OuterVolumeSpecName: "kube-api-access-jzfl6") pod "dc01b01d-6ad2-4595-ab0f-42cc127d1a7a" (UID: "dc01b01d-6ad2-4595-ab0f-42cc127d1a7a"). InnerVolumeSpecName "kube-api-access-jzfl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.493474 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-inventory" (OuterVolumeSpecName: "inventory") pod "dc01b01d-6ad2-4595-ab0f-42cc127d1a7a" (UID: "dc01b01d-6ad2-4595-ab0f-42cc127d1a7a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.503698 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dc01b01d-6ad2-4595-ab0f-42cc127d1a7a" (UID: "dc01b01d-6ad2-4595-ab0f-42cc127d1a7a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.554043 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.554077 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.554087 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzfl6\" (UniqueName: \"kubernetes.io/projected/dc01b01d-6ad2-4595-ab0f-42cc127d1a7a-kube-api-access-jzfl6\") on node \"crc\" DevicePath \"\"" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.889562 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" event={"ID":"dc01b01d-6ad2-4595-ab0f-42cc127d1a7a","Type":"ContainerDied","Data":"4c038fc492bfad51f732135893a087a3183b04a62ad5f274cea168ef6d6ea82f"} Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.889841 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c038fc492bfad51f732135893a087a3183b04a62ad5f274cea168ef6d6ea82f" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.889619 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42d6w" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.984682 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg"] Dec 01 09:15:05 crc kubenswrapper[4689]: E1201 09:15:05.987161 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd36efa6-5f28-4753-b93b-7a574aa0b7e6" containerName="collect-profiles" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.987201 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd36efa6-5f28-4753-b93b-7a574aa0b7e6" containerName="collect-profiles" Dec 01 09:15:05 crc kubenswrapper[4689]: E1201 09:15:05.987230 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc01b01d-6ad2-4595-ab0f-42cc127d1a7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.987238 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc01b01d-6ad2-4595-ab0f-42cc127d1a7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.987501 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd36efa6-5f28-4753-b93b-7a574aa0b7e6" containerName="collect-profiles" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.987533 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc01b01d-6ad2-4595-ab0f-42cc127d1a7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.989111 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.993763 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.993783 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.993783 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:15:05 crc kubenswrapper[4689]: I1201 09:15:05.993902 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.001927 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg"] Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.167255 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjw5\" (UniqueName: \"kubernetes.io/projected/7b1625a4-a976-4cd2-8e93-7022d1571f1f-kube-api-access-gqjw5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x69xg\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.167310 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x69xg\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.167714 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x69xg\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.270460 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x69xg\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.270661 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjw5\" (UniqueName: \"kubernetes.io/projected/7b1625a4-a976-4cd2-8e93-7022d1571f1f-kube-api-access-gqjw5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x69xg\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.270692 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x69xg\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg"
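The VerifyControllerAttachedVolume / MountVolume / MountVolume.SetUp triple above is the kubelet volume manager's standard flow: the reconciler first confirms the volume is attached, then starts the mount, and SetUp materializes it under /var/lib/kubelet/pods/<uid>/volumes. A minimal Go sketch of the volume spec these entries imply, using the k8s.io/api/core/v1 types; note the SecretName values are an inference from the "Caches populated for *v1.Secret" entries above, not something the log states directly.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volumes implied by the VerifyControllerAttachedVolume/MountVolume
	// entries for install-os-edpm-deployment-openstack-edpm-ipam-x69xg.
	// The SecretName values are assumptions inferred from the reflector
	// cache entries; only the volume names appear in the log itself.
	volumes := []corev1.Volume{
		{
			Name: "ssh-key",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "dataplane-ansible-ssh-private-key-secret"},
			},
		},
		{
			Name: "inventory",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "dataplanenodeset-openstack-edpm-ipam"},
			},
		},
		// "kube-api-access-gqjw5" is the auto-injected projected
		// service-account token volume; the kubelet mounts it the same way.
	}
	for _, v := range volumes {
		fmt.Println(v.Name, "->", v.VolumeSource.Secret.SecretName)
	}
}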
\"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.276860 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x69xg\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.279975 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x69xg\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.300208 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjw5\" (UniqueName: \"kubernetes.io/projected/7b1625a4-a976-4cd2-8e93-7022d1571f1f-kube-api-access-gqjw5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-x69xg\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.325577 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.884539 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg"] Dec 01 09:15:06 crc kubenswrapper[4689]: W1201 09:15:06.887762 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b1625a4_a976_4cd2_8e93_7022d1571f1f.slice/crio-7b8980c1232b30a3825b586bdb90cad8ce5bbd4217bb20f39a25300f6b8cb719 WatchSource:0}: Error finding container 7b8980c1232b30a3825b586bdb90cad8ce5bbd4217bb20f39a25300f6b8cb719: Status 404 returned error can't find the container with id 7b8980c1232b30a3825b586bdb90cad8ce5bbd4217bb20f39a25300f6b8cb719 Dec 01 09:15:06 crc kubenswrapper[4689]: I1201 09:15:06.904538 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" event={"ID":"7b1625a4-a976-4cd2-8e93-7022d1571f1f","Type":"ContainerStarted","Data":"7b8980c1232b30a3825b586bdb90cad8ce5bbd4217bb20f39a25300f6b8cb719"} Dec 01 09:15:07 crc kubenswrapper[4689]: I1201 09:15:07.931459 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" event={"ID":"7b1625a4-a976-4cd2-8e93-7022d1571f1f","Type":"ContainerStarted","Data":"508b96ec583ed9843efd23be01e438778ebfa88fa5f24aeeaa89435086ad6e21"} Dec 01 09:15:07 crc kubenswrapper[4689]: I1201 09:15:07.960743 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" podStartSLOduration=2.366677928 podStartE2EDuration="2.960713121s" podCreationTimestamp="2025-12-01 09:15:05 +0000 UTC" firstStartedPulling="2025-12-01 09:15:06.889810461 +0000 UTC m=+2186.962098365" lastFinishedPulling="2025-12-01 09:15:07.483845654 +0000 UTC m=+2187.556133558" observedRunningTime="2025-12-01 09:15:07.954061119 +0000 UTC 
m=+2188.026349023" watchObservedRunningTime="2025-12-01 09:15:07.960713121 +0000 UTC m=+2188.033001025" Dec 01 09:15:09 crc kubenswrapper[4689]: I1201 09:15:09.148059 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:15:09 crc kubenswrapper[4689]: I1201 09:15:09.148535 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:15:09 crc kubenswrapper[4689]: I1201 09:15:09.148596 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 09:15:09 crc kubenswrapper[4689]: I1201 09:15:09.150726 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:15:09 crc kubenswrapper[4689]: I1201 09:15:09.150932 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" gracePeriod=600 Dec 01 09:15:09 crc kubenswrapper[4689]: E1201 09:15:09.800575 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:15:09 crc kubenswrapper[4689]: I1201 09:15:09.950302 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" exitCode=0 Dec 01 09:15:09 crc kubenswrapper[4689]: I1201 09:15:09.950347 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3"} Dec 01 09:15:09 crc kubenswrapper[4689]: I1201 09:15:09.950404 4689 scope.go:117] "RemoveContainer" containerID="fb65526bf7a453c4de10d095fe7ede8e63ed9c728cb3c6e1c38808977ec1a5f0" Dec 01 09:15:09 crc kubenswrapper[4689]: I1201 09:15:09.951164 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:15:09 crc kubenswrapper[4689]: E1201 09:15:09.951609 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:15:25 crc kubenswrapper[4689]: I1201 09:15:25.048273 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:15:25 crc kubenswrapper[4689]: E1201 09:15:25.049589 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:15:40 crc kubenswrapper[4689]: I1201 09:15:40.047098 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:15:40 crc kubenswrapper[4689]: E1201 09:15:40.048095 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:15:48 crc kubenswrapper[4689]: I1201 09:15:48.103186 4689 scope.go:117] "RemoveContainer" containerID="43cb72af69f0beb62deeddfe1a7cceed942748ca60e676f91be5e13083a6d95c" Dec 01 09:15:52 crc kubenswrapper[4689]: I1201 09:15:52.398766 4689 generic.go:334] "Generic (PLEG): container finished" podID="7b1625a4-a976-4cd2-8e93-7022d1571f1f" containerID="508b96ec583ed9843efd23be01e438778ebfa88fa5f24aeeaa89435086ad6e21" exitCode=0 Dec 01 09:15:52 crc kubenswrapper[4689]: I1201 09:15:52.398827 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" event={"ID":"7b1625a4-a976-4cd2-8e93-7022d1571f1f","Type":"ContainerDied","Data":"508b96ec583ed9843efd23be01e438778ebfa88fa5f24aeeaa89435086ad6e21"} Dec 01 09:15:53 crc kubenswrapper[4689]: I1201 09:15:53.951134 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg"
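The repeating "back-off 5m0s" errors above are the kubelet's crash-loop restart back-off sitting at its ceiling: the delay starts at 10s, doubles after each failed restart, and is capped at 5m, so within six restarts every retry of machine-config-daemon waits the full five minutes. A short Go sketch of that schedule; the 10s base and 5m cap are the kubelet's documented defaults, not values read from this log.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Kubelet CrashLoopBackOff schedule: 10s base delay, doubled per
	// failed restart, capped at 5m -- which is why the log settles on
	// "back-off 5m0s" for machine-config-daemon and stays there.
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}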
Dec 01 09:15:53 crc kubenswrapper[4689]: I1201 09:15:53.997809 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-ssh-key\") pod \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " Dec 01 09:15:53 crc kubenswrapper[4689]: I1201 09:15:53.998232 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqjw5\" (UniqueName: \"kubernetes.io/projected/7b1625a4-a976-4cd2-8e93-7022d1571f1f-kube-api-access-gqjw5\") pod \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " Dec 01 09:15:53 crc kubenswrapper[4689]: I1201 09:15:53.998539 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-inventory\") pod \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\" (UID: \"7b1625a4-a976-4cd2-8e93-7022d1571f1f\") " Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.015439 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b1625a4-a976-4cd2-8e93-7022d1571f1f-kube-api-access-gqjw5" (OuterVolumeSpecName: "kube-api-access-gqjw5") pod "7b1625a4-a976-4cd2-8e93-7022d1571f1f" (UID: "7b1625a4-a976-4cd2-8e93-7022d1571f1f"). InnerVolumeSpecName "kube-api-access-gqjw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.029483 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7b1625a4-a976-4cd2-8e93-7022d1571f1f" (UID: "7b1625a4-a976-4cd2-8e93-7022d1571f1f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.032052 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-inventory" (OuterVolumeSpecName: "inventory") pod "7b1625a4-a976-4cd2-8e93-7022d1571f1f" (UID: "7b1625a4-a976-4cd2-8e93-7022d1571f1f"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.047155 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:15:54 crc kubenswrapper[4689]: E1201 09:15:54.047666 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.099749 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.099792 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b1625a4-a976-4cd2-8e93-7022d1571f1f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.099805 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqjw5\" (UniqueName: \"kubernetes.io/projected/7b1625a4-a976-4cd2-8e93-7022d1571f1f-kube-api-access-gqjw5\") on node \"crc\" DevicePath \"\"" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.421591 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" event={"ID":"7b1625a4-a976-4cd2-8e93-7022d1571f1f","Type":"ContainerDied","Data":"7b8980c1232b30a3825b586bdb90cad8ce5bbd4217bb20f39a25300f6b8cb719"} Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.421660 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b8980c1232b30a3825b586bdb90cad8ce5bbd4217bb20f39a25300f6b8cb719" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.421699 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-x69xg" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.513988 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp"] Dec 01 09:15:54 crc kubenswrapper[4689]: E1201 09:15:54.514438 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b1625a4-a976-4cd2-8e93-7022d1571f1f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.514457 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b1625a4-a976-4cd2-8e93-7022d1571f1f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.514685 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b1625a4-a976-4cd2-8e93-7022d1571f1f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.515279 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.520886 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.521112 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.522745 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.538438 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.540946 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp"] Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.607872 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fndhp\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.608076 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d9kj\" (UniqueName: \"kubernetes.io/projected/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-kube-api-access-8d9kj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fndhp\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.608183 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fndhp\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.710067 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fndhp\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.710211 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d9kj\" (UniqueName: \"kubernetes.io/projected/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-kube-api-access-8d9kj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fndhp\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.710274 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fndhp\" 
(UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.714083 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fndhp\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.715336 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fndhp\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.727095 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d9kj\" (UniqueName: \"kubernetes.io/projected/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-kube-api-access-8d9kj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fndhp\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:54 crc kubenswrapper[4689]: I1201 09:15:54.838469 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:15:55 crc kubenswrapper[4689]: I1201 09:15:55.413635 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp"] Dec 01 09:15:55 crc kubenswrapper[4689]: I1201 09:15:55.438059 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" event={"ID":"4ae39c64-0beb-4b8c-a08b-35aba6ecb704","Type":"ContainerStarted","Data":"54688cdb0e880e1a6ce4a3e87f6b3c9ca4c7dbf312511306f822c4e7c3fe9e97"} Dec 01 09:15:56 crc kubenswrapper[4689]: I1201 09:15:56.448630 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" event={"ID":"4ae39c64-0beb-4b8c-a08b-35aba6ecb704","Type":"ContainerStarted","Data":"96bdf66308787238c3c9f30e9a0ca050eb3215c709bd030ff5f2ddfe17438f3e"} Dec 01 09:15:56 crc kubenswrapper[4689]: I1201 09:15:56.473982 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" podStartSLOduration=1.977725167 podStartE2EDuration="2.473961382s" podCreationTimestamp="2025-12-01 09:15:54 +0000 UTC" firstStartedPulling="2025-12-01 09:15:55.42057466 +0000 UTC m=+2235.492862564" lastFinishedPulling="2025-12-01 09:15:55.916810875 +0000 UTC m=+2235.989098779" observedRunningTime="2025-12-01 09:15:56.470549479 +0000 UTC m=+2236.542837393" watchObservedRunningTime="2025-12-01 09:15:56.473961382 +0000 UTC m=+2236.546249296" Dec 01 09:16:08 crc kubenswrapper[4689]: I1201 09:16:08.047440 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:16:08 crc kubenswrapper[4689]: E1201 09:16:08.049207 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:16:21 crc kubenswrapper[4689]: I1201 09:16:21.057188 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:16:21 crc kubenswrapper[4689]: E1201 09:16:21.058298 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:16:35 crc kubenswrapper[4689]: I1201 09:16:35.046972 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:16:35 crc kubenswrapper[4689]: E1201 09:16:35.047676 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:16:46 crc kubenswrapper[4689]: I1201 09:16:46.047892 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:16:46 crc kubenswrapper[4689]: E1201 09:16:46.048899 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:16:59 crc kubenswrapper[4689]: I1201 09:16:59.471025 4689 generic.go:334] "Generic (PLEG): container finished" podID="4ae39c64-0beb-4b8c-a08b-35aba6ecb704" containerID="96bdf66308787238c3c9f30e9a0ca050eb3215c709bd030ff5f2ddfe17438f3e" exitCode=0 Dec 01 09:16:59 crc kubenswrapper[4689]: I1201 09:16:59.471129 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" event={"ID":"4ae39c64-0beb-4b8c-a08b-35aba6ecb704","Type":"ContainerDied","Data":"96bdf66308787238c3c9f30e9a0ca050eb3215c709bd030ff5f2ddfe17438f3e"} Dec 01 09:17:00 crc kubenswrapper[4689]: I1201 09:17:00.901415 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.012544 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d9kj\" (UniqueName: \"kubernetes.io/projected/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-kube-api-access-8d9kj\") pod \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.012591 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-ssh-key\") pod \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.012672 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-inventory\") pod \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\" (UID: \"4ae39c64-0beb-4b8c-a08b-35aba6ecb704\") " Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.019830 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-kube-api-access-8d9kj" (OuterVolumeSpecName: "kube-api-access-8d9kj") pod "4ae39c64-0beb-4b8c-a08b-35aba6ecb704" (UID: "4ae39c64-0beb-4b8c-a08b-35aba6ecb704"). InnerVolumeSpecName "kube-api-access-8d9kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.044119 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4ae39c64-0beb-4b8c-a08b-35aba6ecb704" (UID: "4ae39c64-0beb-4b8c-a08b-35aba6ecb704"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.056164 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-inventory" (OuterVolumeSpecName: "inventory") pod "4ae39c64-0beb-4b8c-a08b-35aba6ecb704" (UID: "4ae39c64-0beb-4b8c-a08b-35aba6ecb704"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.056841 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:17:01 crc kubenswrapper[4689]: E1201 09:17:01.057218 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.115446 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d9kj\" (UniqueName: \"kubernetes.io/projected/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-kube-api-access-8d9kj\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.115486 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.115497 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ae39c64-0beb-4b8c-a08b-35aba6ecb704-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.491946 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" event={"ID":"4ae39c64-0beb-4b8c-a08b-35aba6ecb704","Type":"ContainerDied","Data":"54688cdb0e880e1a6ce4a3e87f6b3c9ca4c7dbf312511306f822c4e7c3fe9e97"} Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.492226 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54688cdb0e880e1a6ce4a3e87f6b3c9ca4c7dbf312511306f822c4e7c3fe9e97" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.491994 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fndhp" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.583455 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-q6z88"] Dec 01 09:17:01 crc kubenswrapper[4689]: E1201 09:17:01.583982 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae39c64-0beb-4b8c-a08b-35aba6ecb704" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.584006 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae39c64-0beb-4b8c-a08b-35aba6ecb704" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.584217 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae39c64-0beb-4b8c-a08b-35aba6ecb704" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.585139 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.587189 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.587843 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.587970 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.591312 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.614467 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-q6z88"] Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.730860 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-q6z88\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.730910 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-q6z88\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.731325 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hmnm\" (UniqueName: \"kubernetes.io/projected/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-kube-api-access-9hmnm\") pod \"ssh-known-hosts-edpm-deployment-q6z88\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.833243 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-q6z88\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.833314 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-q6z88\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.833490 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hmnm\" (UniqueName: \"kubernetes.io/projected/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-kube-api-access-9hmnm\") pod \"ssh-known-hosts-edpm-deployment-q6z88\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc 
kubenswrapper[4689]: I1201 09:17:01.842945 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-q6z88\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.843766 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-q6z88\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.851009 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hmnm\" (UniqueName: \"kubernetes.io/projected/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-kube-api-access-9hmnm\") pod \"ssh-known-hosts-edpm-deployment-q6z88\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:01 crc kubenswrapper[4689]: I1201 09:17:01.903945 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:02 crc kubenswrapper[4689]: I1201 09:17:02.512467 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-q6z88"] Dec 01 09:17:03 crc kubenswrapper[4689]: I1201 09:17:03.530484 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" event={"ID":"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e","Type":"ContainerStarted","Data":"8bd760a93db4507247c5846e205e9a916a07dd7f64725ff987d1aa4911ab7518"} Dec 01 09:17:04 crc kubenswrapper[4689]: I1201 09:17:04.540616 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" event={"ID":"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e","Type":"ContainerStarted","Data":"b1bcf6a8b4a8ba5fe271844b0adcb9c7d6004c1a731d8a1ee754f25921da66f9"} Dec 01 09:17:04 crc kubenswrapper[4689]: I1201 09:17:04.565710 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" podStartSLOduration=2.694263062 podStartE2EDuration="3.565684602s" podCreationTimestamp="2025-12-01 09:17:01 +0000 UTC" firstStartedPulling="2025-12-01 09:17:02.527475767 +0000 UTC m=+2302.599763671" lastFinishedPulling="2025-12-01 09:17:03.398897307 +0000 UTC m=+2303.471185211" observedRunningTime="2025-12-01 09:17:04.55533903 +0000 UTC m=+2304.627626934" watchObservedRunningTime="2025-12-01 09:17:04.565684602 +0000 UTC m=+2304.637972506" Dec 01 09:17:11 crc kubenswrapper[4689]: I1201 09:17:11.607801 4689 generic.go:334] "Generic (PLEG): container finished" podID="8fd75600-1f4f-4bfb-94fd-d9778efd0e5e" containerID="b1bcf6a8b4a8ba5fe271844b0adcb9c7d6004c1a731d8a1ee754f25921da66f9" exitCode=0 Dec 01 09:17:11 crc kubenswrapper[4689]: I1201 09:17:11.607895 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" event={"ID":"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e","Type":"ContainerDied","Data":"b1bcf6a8b4a8ba5fe271844b0adcb9c7d6004c1a731d8a1ee754f25921da66f9"} Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.047660 4689 scope.go:117] "RemoveContainer" 
containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:17:13 crc kubenswrapper[4689]: E1201 09:17:13.048272 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.071682 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.179181 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-inventory-0\") pod \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.179503 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hmnm\" (UniqueName: \"kubernetes.io/projected/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-kube-api-access-9hmnm\") pod \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.179570 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-ssh-key-openstack-edpm-ipam\") pod \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\" (UID: \"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e\") " Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.196808 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-kube-api-access-9hmnm" (OuterVolumeSpecName: "kube-api-access-9hmnm") pod "8fd75600-1f4f-4bfb-94fd-d9778efd0e5e" (UID: "8fd75600-1f4f-4bfb-94fd-d9778efd0e5e"). InnerVolumeSpecName "kube-api-access-9hmnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.213466 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8fd75600-1f4f-4bfb-94fd-d9778efd0e5e" (UID: "8fd75600-1f4f-4bfb-94fd-d9778efd0e5e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.233424 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8fd75600-1f4f-4bfb-94fd-d9778efd0e5e" (UID: "8fd75600-1f4f-4bfb-94fd-d9778efd0e5e"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.283128 4689 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.283195 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hmnm\" (UniqueName: \"kubernetes.io/projected/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-kube-api-access-9hmnm\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.283214 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd75600-1f4f-4bfb-94fd-d9778efd0e5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.632969 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" event={"ID":"8fd75600-1f4f-4bfb-94fd-d9778efd0e5e","Type":"ContainerDied","Data":"8bd760a93db4507247c5846e205e9a916a07dd7f64725ff987d1aa4911ab7518"} Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.633023 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bd760a93db4507247c5846e205e9a916a07dd7f64725ff987d1aa4911ab7518" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.633038 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-q6z88" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.707066 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw"] Dec 01 09:17:13 crc kubenswrapper[4689]: E1201 09:17:13.707599 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd75600-1f4f-4bfb-94fd-d9778efd0e5e" containerName="ssh-known-hosts-edpm-deployment" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.707621 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd75600-1f4f-4bfb-94fd-d9778efd0e5e" containerName="ssh-known-hosts-edpm-deployment" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.708049 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd75600-1f4f-4bfb-94fd-d9778efd0e5e" containerName="ssh-known-hosts-edpm-deployment" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.709450 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.714864 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.715140 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.715278 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.715321 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw"] Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.715527 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.892636 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvnhv\" (UniqueName: \"kubernetes.io/projected/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-kube-api-access-lvnhv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9vvhw\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.893047 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9vvhw\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.893196 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9vvhw\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.995137 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvnhv\" (UniqueName: \"kubernetes.io/projected/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-kube-api-access-lvnhv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9vvhw\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.995261 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9vvhw\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:13 crc kubenswrapper[4689]: I1201 09:17:13.995336 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9vvhw\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:14 crc kubenswrapper[4689]: I1201 09:17:14.000305 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9vvhw\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:14 crc kubenswrapper[4689]: I1201 09:17:14.000953 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9vvhw\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:14 crc kubenswrapper[4689]: I1201 09:17:14.023102 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvnhv\" (UniqueName: \"kubernetes.io/projected/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-kube-api-access-lvnhv\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-9vvhw\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:14 crc kubenswrapper[4689]: I1201 09:17:14.048875 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:14 crc kubenswrapper[4689]: I1201 09:17:14.586896 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw"] Dec 01 09:17:14 crc kubenswrapper[4689]: I1201 09:17:14.594592 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 09:17:14 crc kubenswrapper[4689]: I1201 09:17:14.642942 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" event={"ID":"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe","Type":"ContainerStarted","Data":"2aab0f2c795e955494b7e110e3b4ffbe29de356b6d55b001df3dd763eb481049"} Dec 01 09:17:15 crc kubenswrapper[4689]: I1201 09:17:15.654635 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" event={"ID":"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe","Type":"ContainerStarted","Data":"a57e056285e42dec2e7a4449b834fd0d58bc1964b00c1261ed7425292c893dbf"} Dec 01 09:17:24 crc kubenswrapper[4689]: I1201 09:17:24.048571 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:17:24 crc kubenswrapper[4689]: E1201 09:17:24.049225 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:17:24 crc kubenswrapper[4689]: I1201 09:17:24.741986 4689 generic.go:334] "Generic (PLEG): container finished" podID="3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe" containerID="a57e056285e42dec2e7a4449b834fd0d58bc1964b00c1261ed7425292c893dbf" exitCode=0
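The entries above capture one complete start cycle for the run-os job pod: its three volumes mount, the pause sandbox (2aab0f2c…) starts, the work container (a57e056…) starts, and PLEG later reports that container finished with exitCode=0. The lifecycle can be reconstructed mechanically from the "SyncLoop (PLEG)" entries; below is a minimal Python sketch written against the exact klog format visible in this journal (the function and its name are illustrative, not part of kubelet):

```python
import re
from collections import defaultdict

# Matches PLEG entries exactly as they appear above, e.g.:
#   Dec 01 09:17:14 crc kubenswrapper[4689]: I1201 09:17:14.642942 4689 kubelet.go:2453]
#   "SyncLoop (PLEG): event for pod" pod="openstack/..." event={"ID":"<uid>","Type":"ContainerStarted","Data":"<id>"}
PLEG = re.compile(
    r'(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}) crc kubenswrapper\[\d+\]: '
    r'I\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ kubelet\.go:\d+\] '
    r'"SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" '
    r'event=\{"ID":"[^"]+","Type":"(?P<type>[^"]+)","Data":"(?P<cid>[^"]+)"\}'
)

def pleg_timeline(journal_text: str) -> dict[str, list[tuple[str, str, str]]]:
    """Group ContainerStarted/ContainerDied events by pod, in log order."""
    timeline: dict[str, list[tuple[str, str, str]]] = defaultdict(list)
    for m in PLEG.finditer(journal_text):
        timeline[m.group("pod")].append((m.group("ts"), m.group("type"), m.group("cid")[:12]))
    return dict(timeline)
```

For the run-os pod this yields two ContainerStarted events (sandbox, then work container) followed by two ContainerDied events as the job winds down below.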
"SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" event={"ID":"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe","Type":"ContainerDied","Data":"a57e056285e42dec2e7a4449b834fd0d58bc1964b00c1261ed7425292c893dbf"} Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.217881 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.343206 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-ssh-key\") pod \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.343610 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvnhv\" (UniqueName: \"kubernetes.io/projected/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-kube-api-access-lvnhv\") pod \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.343683 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-inventory\") pod \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\" (UID: \"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe\") " Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.348890 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-kube-api-access-lvnhv" (OuterVolumeSpecName: "kube-api-access-lvnhv") pod "3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe" (UID: "3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe"). InnerVolumeSpecName "kube-api-access-lvnhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.381017 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe" (UID: "3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.381045 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-inventory" (OuterVolumeSpecName: "inventory") pod "3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe" (UID: "3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.445797 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.446016 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvnhv\" (UniqueName: \"kubernetes.io/projected/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-kube-api-access-lvnhv\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.446080 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.761392 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" event={"ID":"3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe","Type":"ContainerDied","Data":"2aab0f2c795e955494b7e110e3b4ffbe29de356b6d55b001df3dd763eb481049"} Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.761441 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aab0f2c795e955494b7e110e3b4ffbe29de356b6d55b001df3dd763eb481049" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.761745 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-9vvhw" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.862769 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5"] Dec 01 09:17:26 crc kubenswrapper[4689]: E1201 09:17:26.863296 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.863315 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.863599 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
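Note the hand-off pattern: ssh-known-hosts, run-os, reboot-os (and install-certs further down) are one-shot Ansible job pods created in sequence, each admitted ("SyncLoop ADD") only after the previous job's container exited 0 and its volumes were torn down, with the kubelet's CPU and memory managers dropping the finished pod's pinning state ("RemoveStaleState") in between. The controller driving the sequence is not visible in this log, but the chain itself can be read straight off the ADD entries, under the same parsing assumptions as the sketch above:

```python
import re

# Matches admission entries like:
#   kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-..."]
ADD = re.compile(r'"SyncLoop ADD" source="api" pods=\["([^"]+)"\]')

def edpm_job_chain(journal_text: str) -> list[str]:
    """EDPM deployment job pods in the order the kubelet admitted them."""
    return [p for p in ADD.findall(journal_text) if "edpm-deployment" in p]

# For this excerpt: run-os-..., then reboot-os-..., then install-certs-...
```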
Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.864420 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.866716 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.866990 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.867165 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.867470 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:17:26 crc kubenswrapper[4689]: I1201 09:17:26.882585 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5"] Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.058108 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.058176 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.059102 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/defe39e2-091c-472e-aefe-7691672100e7-kube-api-access-t5sjj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.160541 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.160613 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.160733 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/defe39e2-091c-472e-aefe-7691672100e7-kube-api-access-t5sjj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5\" (UID:
\"defe39e2-091c-472e-aefe-7691672100e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.165024 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.169986 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.179015 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/defe39e2-091c-472e-aefe-7691672100e7-kube-api-access-t5sjj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.183582 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.740413 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5"] Dec 01 09:17:27 crc kubenswrapper[4689]: I1201 09:17:27.772338 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" event={"ID":"defe39e2-091c-472e-aefe-7691672100e7","Type":"ContainerStarted","Data":"ec89d2d278d6ac5bf686f621834436913dfe21f8be197167537ca129b5c0a951"} Dec 01 09:17:28 crc kubenswrapper[4689]: I1201 09:17:28.782001 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" event={"ID":"defe39e2-091c-472e-aefe-7691672100e7","Type":"ContainerStarted","Data":"8da90d30f40d70595e85dc0210ad301b33f0c64e1a70dcabc5c9030d4bc334af"} Dec 01 09:17:28 crc kubenswrapper[4689]: I1201 09:17:28.804463 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" podStartSLOduration=2.178688255 podStartE2EDuration="2.804444602s" podCreationTimestamp="2025-12-01 09:17:26 +0000 UTC" firstStartedPulling="2025-12-01 09:17:27.756211021 +0000 UTC m=+2327.828498925" lastFinishedPulling="2025-12-01 09:17:28.381967368 +0000 UTC m=+2328.454255272" observedRunningTime="2025-12-01 09:17:28.803566678 +0000 UTC m=+2328.875854602" watchObservedRunningTime="2025-12-01 09:17:28.804444602 +0000 UTC m=+2328.876732526" Dec 01 09:17:37 crc kubenswrapper[4689]: I1201 09:17:37.048200 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:17:37 crc kubenswrapper[4689]: E1201 09:17:37.048928 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 01 09:17:37 crc kubenswrapper[4689]: I1201 09:17:37.048200 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:17:37 crc kubenswrapper[4689]: E1201 09:17:37.048928 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:17:38 crc kubenswrapper[4689]: I1201 09:17:38.892900 4689 generic.go:334] "Generic (PLEG): container finished" podID="defe39e2-091c-472e-aefe-7691672100e7" containerID="8da90d30f40d70595e85dc0210ad301b33f0c64e1a70dcabc5c9030d4bc334af" exitCode=0 Dec 01 09:17:38 crc kubenswrapper[4689]: I1201 09:17:38.892964 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" event={"ID":"defe39e2-091c-472e-aefe-7691672100e7","Type":"ContainerDied","Data":"8da90d30f40d70595e85dc0210ad301b33f0c64e1a70dcabc5c9030d4bc334af"} Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.411808 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.544175 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/defe39e2-091c-472e-aefe-7691672100e7-kube-api-access-t5sjj\") pod \"defe39e2-091c-472e-aefe-7691672100e7\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.544444 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-ssh-key\") pod \"defe39e2-091c-472e-aefe-7691672100e7\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.544488 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-inventory\") pod \"defe39e2-091c-472e-aefe-7691672100e7\" (UID: \"defe39e2-091c-472e-aefe-7691672100e7\") " Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.550116 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defe39e2-091c-472e-aefe-7691672100e7-kube-api-access-t5sjj" (OuterVolumeSpecName: "kube-api-access-t5sjj") pod "defe39e2-091c-472e-aefe-7691672100e7" (UID: "defe39e2-091c-472e-aefe-7691672100e7"). InnerVolumeSpecName "kube-api-access-t5sjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.587666 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "defe39e2-091c-472e-aefe-7691672100e7" (UID: "defe39e2-091c-472e-aefe-7691672100e7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.587736 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-inventory" (OuterVolumeSpecName: "inventory") pod "defe39e2-091c-472e-aefe-7691672100e7" (UID: "defe39e2-091c-472e-aefe-7691672100e7"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.646858 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5sjj\" (UniqueName: \"kubernetes.io/projected/defe39e2-091c-472e-aefe-7691672100e7-kube-api-access-t5sjj\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.646943 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.646958 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/defe39e2-091c-472e-aefe-7691672100e7-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.912833 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" event={"ID":"defe39e2-091c-472e-aefe-7691672100e7","Type":"ContainerDied","Data":"ec89d2d278d6ac5bf686f621834436913dfe21f8be197167537ca129b5c0a951"} Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.912880 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec89d2d278d6ac5bf686f621834436913dfe21f8be197167537ca129b5c0a951" Dec 01 09:17:40 crc kubenswrapper[4689]: I1201 09:17:40.912909 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.015538 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4"] Dec 01 09:17:41 crc kubenswrapper[4689]: E1201 09:17:41.016033 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defe39e2-091c-472e-aefe-7691672100e7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.016060 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="defe39e2-091c-472e-aefe-7691672100e7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.016824 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="defe39e2-091c-472e-aefe-7691672100e7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.017675 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.021381 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.021601 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.021713 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.021861 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.021976 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.022157 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.022272 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.026838 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.034698 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4"] Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163431 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163490 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163556 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcs6l\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-kube-api-access-kcs6l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4"
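Before the install-certs pod's volumes can mount, the kubelet starts a dedicated single-object watch for every Secret and ConfigMap the pod references, which is what this burst of reflector "Caches populated" entries records: seven Secrets (the four default-certs bundles, the SSH private key, the node-set inventory, and the image-pull dockercfg) plus the openstack-aee-default-env ConfigMap. Only then does the reconciler begin the VerifyControllerAttachedVolume/MountVolume sequence that continues below. Extracting such a dependency list from a journal is a one-regex job; an illustrative sketch (the function name is mine):

```python
import re

# Matches reflector entries like:
#   reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
REF = re.compile(r'Caches populated for \*(v1\.\w+) from object-"([^"]+)"/"([^"]+)"')

def referenced_objects(journal_text: str) -> set[tuple[str, str, str]]:
    """(kind, namespace, name) of every object the kubelet started watching."""
    return set(REF.findall(journal_text))
```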
\"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163646 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163683 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163707 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163725 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163747 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163768 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163815 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: 
\"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163831 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163853 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.163904 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.265938 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.265975 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266020 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266079 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266102 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266135 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266175 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcs6l\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-kube-api-access-kcs6l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266229 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266272 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266310 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266341 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266388 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266419 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.266444 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.270901 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.271169 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.272181 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.272472 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.272672 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.272699 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.274414 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.276292 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.276896 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.277282 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.278207 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.278652 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.279780 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.293209 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcs6l\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-kube-api-access-kcs6l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.346786 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.885841 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4"] Dec 01 09:17:41 crc kubenswrapper[4689]: I1201 09:17:41.929308 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" event={"ID":"c8be40c2-4b56-46d0-b99b-0fd198004a03","Type":"ContainerStarted","Data":"888885240ecac9a5d6e9e19f58a4d11705b0ea714442c807f85dbdb2f928fd1b"} Dec 01 09:17:43 crc kubenswrapper[4689]: I1201 09:17:43.955282 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" event={"ID":"c8be40c2-4b56-46d0-b99b-0fd198004a03","Type":"ContainerStarted","Data":"2e99bc84cda33e12818725b9bed45f273d71571b6ec139fa87ee74a0c8fc1f1a"} Dec 01 09:17:43 crc kubenswrapper[4689]: I1201 09:17:43.979392 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" podStartSLOduration=2.528648115 podStartE2EDuration="3.979351125s" podCreationTimestamp="2025-12-01 09:17:40 +0000 UTC" firstStartedPulling="2025-12-01 09:17:41.900697717 +0000 UTC m=+2341.972985631" lastFinishedPulling="2025-12-01 09:17:43.351400737 +0000 UTC m=+2343.423688641" observedRunningTime="2025-12-01 09:17:43.977183836 +0000 UTC m=+2344.049471760" watchObservedRunningTime="2025-12-01 09:17:43.979351125 +0000 UTC m=+2344.051639029" Dec 01 09:17:49 crc kubenswrapper[4689]: I1201 09:17:49.047422 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:17:49 crc kubenswrapper[4689]: E1201 09:17:49.048334 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
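Threaded through the job pods, the same machine-config-daemon entry pair repeats at 09:17:24, 09:17:37, and 09:17:49 above (and 09:18:03/09:18:17 below): each sync attempt finds the container still inside its crash-loop back-off window and is skipped with "Error syncing pod". "back-off 5m0s" means the restart delay has already reached kubelet's cap; by upstream defaults the delay starts at 10s and doubles on each failed restart up to 5m. A sketch of that schedule, assuming those default values rather than reimplementing kubelet:

```python
import itertools

def crashloop_delays(initial: float = 10.0, factor: float = 2.0, cap: float = 300.0):
    """Yield kubelet-style crash-loop restart delays: 10s, doubling, capped at 5m."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

print(list(itertools.islice(crashloop_delays(), 7)))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```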
pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:18:17 crc kubenswrapper[4689]: I1201 09:18:17.049007 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:18:17 crc kubenswrapper[4689]: E1201 09:18:17.049759 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:18:25 crc kubenswrapper[4689]: I1201 09:18:25.417292 4689 generic.go:334] "Generic (PLEG): container finished" podID="c8be40c2-4b56-46d0-b99b-0fd198004a03" containerID="2e99bc84cda33e12818725b9bed45f273d71571b6ec139fa87ee74a0c8fc1f1a" exitCode=0 Dec 01 09:18:25 crc kubenswrapper[4689]: I1201 09:18:25.417490 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" event={"ID":"c8be40c2-4b56-46d0-b99b-0fd198004a03","Type":"ContainerDied","Data":"2e99bc84cda33e12818725b9bed45f273d71571b6ec139fa87ee74a0c8fc1f1a"} Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.861864 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969054 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ssh-key\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969143 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-ovn-default-certs-0\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969181 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-telemetry-combined-ca-bundle\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969215 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ovn-combined-ca-bundle\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969238 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcs6l\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-kube-api-access-kcs6l\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969257 4689 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-repo-setup-combined-ca-bundle\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969297 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-libvirt-combined-ca-bundle\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969344 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969401 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969446 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-neutron-metadata-combined-ca-bundle\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969496 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969535 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-nova-combined-ca-bundle\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969569 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-inventory\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.969610 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-bootstrap-combined-ca-bundle\") pod \"c8be40c2-4b56-46d0-b99b-0fd198004a03\" (UID: \"c8be40c2-4b56-46d0-b99b-0fd198004a03\") " Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.979278 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.981502 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.982008 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.985602 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.985678 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.985727 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.986139 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.986923 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-kube-api-access-kcs6l" (OuterVolumeSpecName: "kube-api-access-kcs6l") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "kube-api-access-kcs6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.987654 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.993858 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:18:26 crc kubenswrapper[4689]: I1201 09:18:26.994340 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.000622 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.030102 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.032400 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-inventory" (OuterVolumeSpecName: "inventory") pod "c8be40c2-4b56-46d0-b99b-0fd198004a03" (UID: "c8be40c2-4b56-46d0-b99b-0fd198004a03"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072125 4689 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072159 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072170 4689 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072180 4689 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072191 4689 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072200 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcs6l\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-kube-api-access-kcs6l\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072209 4689 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072218 4689 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072228 4689 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072240 4689 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072250 4689 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072261 4689 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c8be40c2-4b56-46d0-b99b-0fd198004a03-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072272 4689 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.072281 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8be40c2-4b56-46d0-b99b-0fd198004a03-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:18:27 crc kubenswrapper[4689]: E1201 09:18:27.249584 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8be40c2_4b56_46d0_b99b_0fd198004a03.slice\": RecentStats: unable to find data in memory cache]" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.454211 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" event={"ID":"c8be40c2-4b56-46d0-b99b-0fd198004a03","Type":"ContainerDied","Data":"888885240ecac9a5d6e9e19f58a4d11705b0ea714442c807f85dbdb2f928fd1b"} Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.454258 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="888885240ecac9a5d6e9e19f58a4d11705b0ea714442c807f85dbdb2f928fd1b" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.454343 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.553839 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc"] Dec 01 09:18:27 crc kubenswrapper[4689]: E1201 09:18:27.554227 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8be40c2-4b56-46d0-b99b-0fd198004a03" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.554252 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8be40c2-4b56-46d0-b99b-0fd198004a03" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.554515 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8be40c2-4b56-46d0-b99b-0fd198004a03" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.555169 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.561431 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.561609 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.563613 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.564300 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.564451 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.565098 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc"] Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.585101 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvp9\" (UniqueName: \"kubernetes.io/projected/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-kube-api-access-9xvp9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.585152 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.585218 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.585642 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.585812 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.686884 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.686980 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvp9\" (UniqueName: \"kubernetes.io/projected/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-kube-api-access-9xvp9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.687009 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.687028 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.687084 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.691558 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.697684 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.698184 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.706898 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") 
" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.707271 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvp9\" (UniqueName: \"kubernetes.io/projected/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-kube-api-access-9xvp9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9d8rc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:27 crc kubenswrapper[4689]: I1201 09:18:27.872778 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:18:28 crc kubenswrapper[4689]: I1201 09:18:28.621779 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc"] Dec 01 09:18:29 crc kubenswrapper[4689]: I1201 09:18:29.473681 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" event={"ID":"8df0b21e-33ac-48fa-b46f-558a7e4c37fc","Type":"ContainerStarted","Data":"b135ebb42cfd568ea1668d1790a5c061555c115f84dae4a76326fcb491ba6406"} Dec 01 09:18:30 crc kubenswrapper[4689]: I1201 09:18:30.047153 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:18:30 crc kubenswrapper[4689]: E1201 09:18:30.047777 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:18:30 crc kubenswrapper[4689]: I1201 09:18:30.483591 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" event={"ID":"8df0b21e-33ac-48fa-b46f-558a7e4c37fc","Type":"ContainerStarted","Data":"9536f66eb09005a117776f66caf6ce5bae5768bd26170396f29cf147f437dca3"} Dec 01 09:18:30 crc kubenswrapper[4689]: I1201 09:18:30.505125 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" podStartSLOduration=2.582958745 podStartE2EDuration="3.505106697s" podCreationTimestamp="2025-12-01 09:18:27 +0000 UTC" firstStartedPulling="2025-12-01 09:18:28.624628195 +0000 UTC m=+2388.696916099" lastFinishedPulling="2025-12-01 09:18:29.546776147 +0000 UTC m=+2389.619064051" observedRunningTime="2025-12-01 09:18:30.501179819 +0000 UTC m=+2390.573467723" watchObservedRunningTime="2025-12-01 09:18:30.505106697 +0000 UTC m=+2390.577394601" Dec 01 09:18:45 crc kubenswrapper[4689]: I1201 09:18:45.049080 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:18:45 crc kubenswrapper[4689]: E1201 09:18:45.050326 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:18:59 crc 
kubenswrapper[4689]: I1201 09:18:59.047341 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:18:59 crc kubenswrapper[4689]: E1201 09:18:59.048223 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:19:14 crc kubenswrapper[4689]: I1201 09:19:14.047925 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:19:14 crc kubenswrapper[4689]: E1201 09:19:14.048802 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:19:26 crc kubenswrapper[4689]: I1201 09:19:26.047395 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:19:26 crc kubenswrapper[4689]: E1201 09:19:26.048311 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:19:40 crc kubenswrapper[4689]: I1201 09:19:40.234313 4689 generic.go:334] "Generic (PLEG): container finished" podID="8df0b21e-33ac-48fa-b46f-558a7e4c37fc" containerID="9536f66eb09005a117776f66caf6ce5bae5768bd26170396f29cf147f437dca3" exitCode=0 Dec 01 09:19:40 crc kubenswrapper[4689]: I1201 09:19:40.234443 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" event={"ID":"8df0b21e-33ac-48fa-b46f-558a7e4c37fc","Type":"ContainerDied","Data":"9536f66eb09005a117776f66caf6ce5bae5768bd26170396f29cf147f437dca3"} Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.054181 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:19:41 crc kubenswrapper[4689]: E1201 09:19:41.055471 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.660505 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.801072 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xvp9\" (UniqueName: \"kubernetes.io/projected/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-kube-api-access-9xvp9\") pod \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.801260 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-inventory\") pod \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.801326 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovn-combined-ca-bundle\") pod \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.801509 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovncontroller-config-0\") pod \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.801531 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ssh-key\") pod \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\" (UID: \"8df0b21e-33ac-48fa-b46f-558a7e4c37fc\") " Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.808219 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-kube-api-access-9xvp9" (OuterVolumeSpecName: "kube-api-access-9xvp9") pod "8df0b21e-33ac-48fa-b46f-558a7e4c37fc" (UID: "8df0b21e-33ac-48fa-b46f-558a7e4c37fc"). InnerVolumeSpecName "kube-api-access-9xvp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.811508 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8df0b21e-33ac-48fa-b46f-558a7e4c37fc" (UID: "8df0b21e-33ac-48fa-b46f-558a7e4c37fc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.835552 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "8df0b21e-33ac-48fa-b46f-558a7e4c37fc" (UID: "8df0b21e-33ac-48fa-b46f-558a7e4c37fc"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.839639 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-inventory" (OuterVolumeSpecName: "inventory") pod "8df0b21e-33ac-48fa-b46f-558a7e4c37fc" (UID: "8df0b21e-33ac-48fa-b46f-558a7e4c37fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.839860 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8df0b21e-33ac-48fa-b46f-558a7e4c37fc" (UID: "8df0b21e-33ac-48fa-b46f-558a7e4c37fc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.903576 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.903606 4689 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.903620 4689 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.903628 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:19:41 crc kubenswrapper[4689]: I1201 09:19:41.903637 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xvp9\" (UniqueName: \"kubernetes.io/projected/8df0b21e-33ac-48fa-b46f-558a7e4c37fc-kube-api-access-9xvp9\") on node \"crc\" DevicePath \"\"" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.253409 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" event={"ID":"8df0b21e-33ac-48fa-b46f-558a7e4c37fc","Type":"ContainerDied","Data":"b135ebb42cfd568ea1668d1790a5c061555c115f84dae4a76326fcb491ba6406"} Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.253502 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b135ebb42cfd568ea1668d1790a5c061555c115f84dae4a76326fcb491ba6406" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.253446 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9d8rc" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.366724 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz"] Dec 01 09:19:42 crc kubenswrapper[4689]: E1201 09:19:42.367191 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df0b21e-33ac-48fa-b46f-558a7e4c37fc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.367211 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df0b21e-33ac-48fa-b46f-558a7e4c37fc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.367426 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df0b21e-33ac-48fa-b46f-558a7e4c37fc" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.368108 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.370067 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.370193 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.370266 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.370405 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.370568 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.372652 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.380743 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz"] Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.515533 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcnc2\" (UniqueName: \"kubernetes.io/projected/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-kube-api-access-bcnc2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.515591 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.515822 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.515862 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.515892 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.515922 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.617797 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.618299 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.618348 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.618410 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: 
\"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.618453 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.618500 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcnc2\" (UniqueName: \"kubernetes.io/projected/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-kube-api-access-bcnc2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.621974 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.622066 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.623408 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.625383 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.626353 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.636595 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcnc2\" 
(UniqueName: \"kubernetes.io/projected/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-kube-api-access-bcnc2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:42 crc kubenswrapper[4689]: I1201 09:19:42.686787 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:19:43 crc kubenswrapper[4689]: I1201 09:19:43.291200 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz"] Dec 01 09:19:44 crc kubenswrapper[4689]: I1201 09:19:44.275351 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" event={"ID":"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a","Type":"ContainerStarted","Data":"dd96f5732ec921c5fab1a157a591233f30dd791ac769e649192f30cf056ea738"} Dec 01 09:19:44 crc kubenswrapper[4689]: I1201 09:19:44.275423 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" event={"ID":"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a","Type":"ContainerStarted","Data":"8894f9a965b1e01c9ef54f9e17b7d16f46a57e3dfbc687ef36ec84f1fcdd338c"} Dec 01 09:19:53 crc kubenswrapper[4689]: I1201 09:19:53.047729 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:19:53 crc kubenswrapper[4689]: E1201 09:19:53.048478 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:20:06 crc kubenswrapper[4689]: I1201 09:20:06.047903 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:20:06 crc kubenswrapper[4689]: E1201 09:20:06.048773 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:20:18 crc kubenswrapper[4689]: I1201 09:20:18.048079 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:20:18 crc kubenswrapper[4689]: I1201 09:20:18.598903 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"6ffee746d9d222deafefbfb3f0b9f2751015c989eb3af976fcaabb3d08c27595"} Dec 01 09:20:18 crc kubenswrapper[4689]: I1201 09:20:18.627542 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" podStartSLOduration=36.07396558 podStartE2EDuration="36.62751528s" 
podCreationTimestamp="2025-12-01 09:19:42 +0000 UTC" firstStartedPulling="2025-12-01 09:19:43.305743851 +0000 UTC m=+2463.378031755" lastFinishedPulling="2025-12-01 09:19:43.859293551 +0000 UTC m=+2463.931581455" observedRunningTime="2025-12-01 09:19:44.296281067 +0000 UTC m=+2464.368568991" watchObservedRunningTime="2025-12-01 09:20:18.62751528 +0000 UTC m=+2498.699803204" Dec 01 09:20:39 crc kubenswrapper[4689]: I1201 09:20:39.924711 4689 generic.go:334] "Generic (PLEG): container finished" podID="ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" containerID="dd96f5732ec921c5fab1a157a591233f30dd791ac769e649192f30cf056ea738" exitCode=0 Dec 01 09:20:39 crc kubenswrapper[4689]: I1201 09:20:39.924781 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" event={"ID":"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a","Type":"ContainerDied","Data":"dd96f5732ec921c5fab1a157a591233f30dd791ac769e649192f30cf056ea738"} Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.416864 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.589785 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcnc2\" (UniqueName: \"kubernetes.io/projected/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-kube-api-access-bcnc2\") pod \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.590133 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.590239 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-ssh-key\") pod \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.590810 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-inventory\") pod \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.590953 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-metadata-combined-ca-bundle\") pod \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.591106 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-nova-metadata-neutron-config-0\") pod \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\" (UID: \"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a\") " Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.611002 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-kube-api-access-bcnc2" (OuterVolumeSpecName: "kube-api-access-bcnc2") pod "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" (UID: "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a"). InnerVolumeSpecName "kube-api-access-bcnc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.616031 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" (UID: "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.623470 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" (UID: "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.629383 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-inventory" (OuterVolumeSpecName: "inventory") pod "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" (UID: "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.641658 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" (UID: "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.651239 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" (UID: "ccee02a7-c83e-4eb5-a6e7-f2ad619d948a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.693063 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.693114 4689 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.693126 4689 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.693136 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcnc2\" (UniqueName: \"kubernetes.io/projected/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-kube-api-access-bcnc2\") on node \"crc\" DevicePath \"\"" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.693146 4689 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.693172 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccee02a7-c83e-4eb5-a6e7-f2ad619d948a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.954238 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" event={"ID":"ccee02a7-c83e-4eb5-a6e7-f2ad619d948a","Type":"ContainerDied","Data":"8894f9a965b1e01c9ef54f9e17b7d16f46a57e3dfbc687ef36ec84f1fcdd338c"} Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.954292 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8894f9a965b1e01c9ef54f9e17b7d16f46a57e3dfbc687ef36ec84f1fcdd338c" Dec 01 09:20:41 crc kubenswrapper[4689]: I1201 09:20:41.954609 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.045213 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8"] Dec 01 09:20:42 crc kubenswrapper[4689]: E1201 09:20:42.045741 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.045760 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.045977 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccee02a7-c83e-4eb5-a6e7-f2ad619d948a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.046718 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.049461 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.099357 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8"] Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.099973 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.100186 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.100299 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.103473 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.204886 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.204949 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.205001 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.205342 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6cwb\" (UniqueName: \"kubernetes.io/projected/79ac411d-051b-464c-ab78-a5e99ef18520-kube-api-access-k6cwb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.205459 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.307323 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k6cwb\" (UniqueName: \"kubernetes.io/projected/79ac411d-051b-464c-ab78-a5e99ef18520-kube-api-access-k6cwb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.308037 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.309437 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.309494 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.309588 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.313739 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.314722 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.315222 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.321239 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.334598 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6cwb\" (UniqueName: \"kubernetes.io/projected/79ac411d-051b-464c-ab78-a5e99ef18520-kube-api-access-k6cwb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.433646 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:20:42 crc kubenswrapper[4689]: I1201 09:20:42.969837 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8"] Dec 01 09:20:43 crc kubenswrapper[4689]: I1201 09:20:43.978323 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" event={"ID":"79ac411d-051b-464c-ab78-a5e99ef18520","Type":"ContainerStarted","Data":"46867a06cad617b583526149ec8e9204cfdf7d2f9d1fe278a57d96a94f1d7f43"} Dec 01 09:20:43 crc kubenswrapper[4689]: I1201 09:20:43.978804 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" event={"ID":"79ac411d-051b-464c-ab78-a5e99ef18520","Type":"ContainerStarted","Data":"2b8813652e7323893acc0f463128aa23d2c47ca86fdef4fefa2c287df18cd7b0"} Dec 01 09:20:44 crc kubenswrapper[4689]: I1201 09:20:44.004506 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" podStartSLOduration=1.439407565 podStartE2EDuration="2.004483999s" podCreationTimestamp="2025-12-01 09:20:42 +0000 UTC" firstStartedPulling="2025-12-01 09:20:42.98492122 +0000 UTC m=+2523.057209124" lastFinishedPulling="2025-12-01 09:20:43.549997654 +0000 UTC m=+2523.622285558" observedRunningTime="2025-12-01 09:20:43.996343396 +0000 UTC m=+2524.068631310" watchObservedRunningTime="2025-12-01 09:20:44.004483999 +0000 UTC m=+2524.076771913" Dec 01 09:22:39 crc kubenswrapper[4689]: I1201 09:22:39.146718 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:22:39 crc kubenswrapper[4689]: I1201 09:22:39.148487 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:23:09 crc kubenswrapper[4689]: I1201 09:23:09.147308 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:23:09 crc kubenswrapper[4689]: I1201 09:23:09.147972 4689 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:23:39 crc kubenswrapper[4689]: I1201 09:23:39.147318 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:23:39 crc kubenswrapper[4689]: I1201 09:23:39.147907 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:23:39 crc kubenswrapper[4689]: I1201 09:23:39.147975 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 09:23:39 crc kubenswrapper[4689]: I1201 09:23:39.148952 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ffee746d9d222deafefbfb3f0b9f2751015c989eb3af976fcaabb3d08c27595"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:23:39 crc kubenswrapper[4689]: I1201 09:23:39.149059 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://6ffee746d9d222deafefbfb3f0b9f2751015c989eb3af976fcaabb3d08c27595" gracePeriod=600 Dec 01 09:23:39 crc kubenswrapper[4689]: I1201 09:23:39.790227 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="6ffee746d9d222deafefbfb3f0b9f2751015c989eb3af976fcaabb3d08c27595" exitCode=0 Dec 01 09:23:39 crc kubenswrapper[4689]: I1201 09:23:39.790307 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"6ffee746d9d222deafefbfb3f0b9f2751015c989eb3af976fcaabb3d08c27595"} Dec 01 09:23:39 crc kubenswrapper[4689]: I1201 09:23:39.790769 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"} Dec 01 09:23:39 crc kubenswrapper[4689]: I1201 09:23:39.790795 4689 scope.go:117] "RemoveContainer" containerID="6af8c01fe62e3173d12af05e7baaa005504b6f1d01dccfa4c248d8eb547589a3" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.437492 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8pckx"] Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.442146 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.448096 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pckx"] Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.536433 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-utilities\") pod \"redhat-operators-8pckx\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.536492 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-catalog-content\") pod \"redhat-operators-8pckx\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.536583 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spscx\" (UniqueName: \"kubernetes.io/projected/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-kube-api-access-spscx\") pod \"redhat-operators-8pckx\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.638885 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spscx\" (UniqueName: \"kubernetes.io/projected/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-kube-api-access-spscx\") pod \"redhat-operators-8pckx\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.639068 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-utilities\") pod \"redhat-operators-8pckx\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.639099 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-catalog-content\") pod \"redhat-operators-8pckx\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.639713 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-utilities\") pod \"redhat-operators-8pckx\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.639771 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-catalog-content\") pod \"redhat-operators-8pckx\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.661441 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-spscx\" (UniqueName: \"kubernetes.io/projected/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-kube-api-access-spscx\") pod \"redhat-operators-8pckx\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:25 crc kubenswrapper[4689]: I1201 09:24:25.768858 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:26 crc kubenswrapper[4689]: I1201 09:24:26.280765 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pckx"] Dec 01 09:24:27 crc kubenswrapper[4689]: I1201 09:24:27.265975 4689 generic.go:334] "Generic (PLEG): container finished" podID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerID="bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff" exitCode=0 Dec 01 09:24:27 crc kubenswrapper[4689]: I1201 09:24:27.266543 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pckx" event={"ID":"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd","Type":"ContainerDied","Data":"bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff"} Dec 01 09:24:27 crc kubenswrapper[4689]: I1201 09:24:27.266570 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pckx" event={"ID":"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd","Type":"ContainerStarted","Data":"e26ade7a8ce1087e1a4e9730a6ba4d1d099a098a6347f13412301f586ae063e7"} Dec 01 09:24:27 crc kubenswrapper[4689]: I1201 09:24:27.268942 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 09:24:29 crc kubenswrapper[4689]: I1201 09:24:29.295523 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pckx" event={"ID":"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd","Type":"ContainerStarted","Data":"c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f"} Dec 01 09:24:36 crc kubenswrapper[4689]: I1201 09:24:36.374731 4689 generic.go:334] "Generic (PLEG): container finished" podID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerID="c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f" exitCode=0 Dec 01 09:24:36 crc kubenswrapper[4689]: I1201 09:24:36.374816 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pckx" event={"ID":"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd","Type":"ContainerDied","Data":"c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f"} Dec 01 09:24:39 crc kubenswrapper[4689]: I1201 09:24:39.412003 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pckx" event={"ID":"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd","Type":"ContainerStarted","Data":"97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7"} Dec 01 09:24:39 crc kubenswrapper[4689]: I1201 09:24:39.439793 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8pckx" podStartSLOduration=3.814334745 podStartE2EDuration="14.439757893s" podCreationTimestamp="2025-12-01 09:24:25 +0000 UTC" firstStartedPulling="2025-12-01 09:24:27.268732257 +0000 UTC m=+2747.341020161" lastFinishedPulling="2025-12-01 09:24:37.894155395 +0000 UTC m=+2757.966443309" observedRunningTime="2025-12-01 09:24:39.431512578 +0000 UTC m=+2759.503800502" watchObservedRunningTime="2025-12-01 09:24:39.439757893 +0000 UTC m=+2759.512045797" Dec 01 09:24:45 crc 
kubenswrapper[4689]: I1201 09:24:45.770015 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:45 crc kubenswrapper[4689]: I1201 09:24:45.770663 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:46 crc kubenswrapper[4689]: I1201 09:24:46.824207 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8pckx" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerName="registry-server" probeResult="failure" output=< Dec 01 09:24:46 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s Dec 01 09:24:46 crc kubenswrapper[4689]: > Dec 01 09:24:55 crc kubenswrapper[4689]: I1201 09:24:55.815191 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:55 crc kubenswrapper[4689]: I1201 09:24:55.866576 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:56 crc kubenswrapper[4689]: I1201 09:24:56.625160 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pckx"] Dec 01 09:24:57 crc kubenswrapper[4689]: I1201 09:24:57.598615 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8pckx" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerName="registry-server" containerID="cri-o://97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7" gracePeriod=2 Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.136521 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.286912 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spscx\" (UniqueName: \"kubernetes.io/projected/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-kube-api-access-spscx\") pod \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.287141 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-catalog-content\") pod \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.287292 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-utilities\") pod \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\" (UID: \"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd\") " Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.287959 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-utilities" (OuterVolumeSpecName: "utilities") pod "b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" (UID: "b34eca3f-e88d-4d41-9ddf-432b8a41a6cd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.292305 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-kube-api-access-spscx" (OuterVolumeSpecName: "kube-api-access-spscx") pod "b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" (UID: "b34eca3f-e88d-4d41-9ddf-432b8a41a6cd"). InnerVolumeSpecName "kube-api-access-spscx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.390521 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spscx\" (UniqueName: \"kubernetes.io/projected/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-kube-api-access-spscx\") on node \"crc\" DevicePath \"\"" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.390568 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.401184 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" (UID: "b34eca3f-e88d-4d41-9ddf-432b8a41a6cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.492493 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.607972 4689 generic.go:334] "Generic (PLEG): container finished" podID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerID="97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7" exitCode=0 Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.608028 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8pckx" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.608024 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pckx" event={"ID":"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd","Type":"ContainerDied","Data":"97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7"} Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.608153 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pckx" event={"ID":"b34eca3f-e88d-4d41-9ddf-432b8a41a6cd","Type":"ContainerDied","Data":"e26ade7a8ce1087e1a4e9730a6ba4d1d099a098a6347f13412301f586ae063e7"} Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.608181 4689 scope.go:117] "RemoveContainer" containerID="97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.634544 4689 scope.go:117] "RemoveContainer" containerID="c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.649800 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pckx"] Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.659856 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8pckx"] Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.662781 4689 scope.go:117] "RemoveContainer" containerID="bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.713572 4689 scope.go:117] "RemoveContainer" containerID="97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7" Dec 01 09:24:58 crc kubenswrapper[4689]: E1201 09:24:58.714955 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7\": container with ID starting with 97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7 not found: ID does not exist" containerID="97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.715016 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7"} err="failed to get container status \"97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7\": rpc error: code = NotFound desc = could not find container \"97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7\": container with ID starting with 97c3e142949a2a5141072aeb394f2fbdc790e4701e6078f597329887cc6606b7 not found: ID does not exist" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.715044 4689 scope.go:117] "RemoveContainer" containerID="c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f" Dec 01 09:24:58 crc kubenswrapper[4689]: E1201 09:24:58.718013 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f\": container with ID starting with c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f not found: ID does not exist" containerID="c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.718078 4689 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f"} err="failed to get container status \"c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f\": rpc error: code = NotFound desc = could not find container \"c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f\": container with ID starting with c006b2dcb85c74403165669274b3836333cf4c6d1140994520149ddc1fa96b9f not found: ID does not exist" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.718144 4689 scope.go:117] "RemoveContainer" containerID="bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff" Dec 01 09:24:58 crc kubenswrapper[4689]: E1201 09:24:58.718582 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff\": container with ID starting with bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff not found: ID does not exist" containerID="bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff" Dec 01 09:24:58 crc kubenswrapper[4689]: I1201 09:24:58.718620 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff"} err="failed to get container status \"bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff\": rpc error: code = NotFound desc = could not find container \"bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff\": container with ID starting with bfb91b701f74709c1cf3967bb5bfcd1bbbed6766db7ee4f6f841fbe51ab5d6ff not found: ID does not exist" Dec 01 09:24:59 crc kubenswrapper[4689]: I1201 09:24:59.058043 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" path="/var/lib/kubelet/pods/b34eca3f-e88d-4d41-9ddf-432b8a41a6cd/volumes" Dec 01 09:25:24 crc kubenswrapper[4689]: I1201 09:25:24.288769 4689 generic.go:334] "Generic (PLEG): container finished" podID="79ac411d-051b-464c-ab78-a5e99ef18520" containerID="46867a06cad617b583526149ec8e9204cfdf7d2f9d1fe278a57d96a94f1d7f43" exitCode=0 Dec 01 09:25:24 crc kubenswrapper[4689]: I1201 09:25:24.289087 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" event={"ID":"79ac411d-051b-464c-ab78-a5e99ef18520","Type":"ContainerDied","Data":"46867a06cad617b583526149ec8e9204cfdf7d2f9d1fe278a57d96a94f1d7f43"} Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.786256 4689 util.go:48] "No ready sandbox for pod can be found. 
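Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8"

The three "ContainerStatus from runtime service failed ... NotFound" errors at 09:24:58 above are a benign teardown race rather than a fault: RemoveContainer asks CRI-O for the status of each container of the deleted catalog pod, but the runtime has already pruned them, so the lookup returns rpc code NotFound and DeleteContainer simply reports it. A sketch for checking this from the node, assuming crictl and the container ID taken from the log (both commands are expected to come back empty or NotFound once cleanup has finished):

    $ sudo crictl ps -a --name registry-server
    $ sudo crictl inspect 97c3e142949a   # ID prefixes are accepted; fails with NotFound after pruning
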
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.936103 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-combined-ca-bundle\") pod \"79ac411d-051b-464c-ab78-a5e99ef18520\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.939069 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-secret-0\") pod \"79ac411d-051b-464c-ab78-a5e99ef18520\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.939169 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6cwb\" (UniqueName: \"kubernetes.io/projected/79ac411d-051b-464c-ab78-a5e99ef18520-kube-api-access-k6cwb\") pod \"79ac411d-051b-464c-ab78-a5e99ef18520\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.939200 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-inventory\") pod \"79ac411d-051b-464c-ab78-a5e99ef18520\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.939233 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-ssh-key\") pod \"79ac411d-051b-464c-ab78-a5e99ef18520\" (UID: \"79ac411d-051b-464c-ab78-a5e99ef18520\") " Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.945084 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ac411d-051b-464c-ab78-a5e99ef18520-kube-api-access-k6cwb" (OuterVolumeSpecName: "kube-api-access-k6cwb") pod "79ac411d-051b-464c-ab78-a5e99ef18520" (UID: "79ac411d-051b-464c-ab78-a5e99ef18520"). InnerVolumeSpecName "kube-api-access-k6cwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.947470 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "79ac411d-051b-464c-ab78-a5e99ef18520" (UID: "79ac411d-051b-464c-ab78-a5e99ef18520"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.980614 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "79ac411d-051b-464c-ab78-a5e99ef18520" (UID: "79ac411d-051b-464c-ab78-a5e99ef18520"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.987535 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "79ac411d-051b-464c-ab78-a5e99ef18520" (UID: "79ac411d-051b-464c-ab78-a5e99ef18520"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:25:25 crc kubenswrapper[4689]: I1201 09:25:25.993975 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-inventory" (OuterVolumeSpecName: "inventory") pod "79ac411d-051b-464c-ab78-a5e99ef18520" (UID: "79ac411d-051b-464c-ab78-a5e99ef18520"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.046008 4689 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.046035 4689 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.046044 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6cwb\" (UniqueName: \"kubernetes.io/projected/79ac411d-051b-464c-ab78-a5e99ef18520-kube-api-access-k6cwb\") on node \"crc\" DevicePath \"\"" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.046066 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-inventory\") on node \"crc\" DevicePath \"\"" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.046075 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79ac411d-051b-464c-ab78-a5e99ef18520-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.310245 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" event={"ID":"79ac411d-051b-464c-ab78-a5e99ef18520","Type":"ContainerDied","Data":"2b8813652e7323893acc0f463128aa23d2c47ca86fdef4fefa2c287df18cd7b0"} Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.310621 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b8813652e7323893acc0f463128aa23d2c47ca86fdef4fefa2c287df18cd7b0" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.310340 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.417339 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg"] Dec 01 09:25:26 crc kubenswrapper[4689]: E1201 09:25:26.417734 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ac411d-051b-464c-ab78-a5e99ef18520" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.417750 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ac411d-051b-464c-ab78-a5e99ef18520" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 09:25:26 crc kubenswrapper[4689]: E1201 09:25:26.417763 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerName="extract-content" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.417768 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerName="extract-content" Dec 01 09:25:26 crc kubenswrapper[4689]: E1201 09:25:26.417787 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerName="registry-server" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.417794 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerName="registry-server" Dec 01 09:25:26 crc kubenswrapper[4689]: E1201 09:25:26.417810 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerName="extract-utilities" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.417815 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerName="extract-utilities" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.417995 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ac411d-051b-464c-ab78-a5e99ef18520" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.418013 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34eca3f-e88d-4d41-9ddf-432b8a41a6cd" containerName="registry-server" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.418628 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.421968 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.422225 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.423009 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.424679 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.425112 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.429132 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.432881 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.433617 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg"] Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.452985 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.453060 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.453122 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.453171 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.453196 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.453227 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.453359 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.453497 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.453579 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8596c\" (UniqueName: \"kubernetes.io/projected/5351042e-776c-44c1-a6ad-bf530a24bfb7-kube-api-access-8596c\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.555588 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.555666 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8596c\" (UniqueName: \"kubernetes.io/projected/5351042e-776c-44c1-a6ad-bf530a24bfb7-kube-api-access-8596c\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.555715 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.555772 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.555822 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.555891 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.555924 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.555972 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.556027 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.556541 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.560623 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.564312 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.572584 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.572840 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.573922 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.574122 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.577907 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.578253 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8596c\" (UniqueName: \"kubernetes.io/projected/5351042e-776c-44c1-a6ad-bf530a24bfb7-kube-api-access-8596c\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mkgfg\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" Dec 01 09:25:26 crc kubenswrapper[4689]: I1201 09:25:26.737346 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg"
Dec 01 09:25:27 crc kubenswrapper[4689]: I1201 09:25:27.314483 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg"]
Dec 01 09:25:27 crc kubenswrapper[4689]: I1201 09:25:27.324006 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" event={"ID":"5351042e-776c-44c1-a6ad-bf530a24bfb7","Type":"ContainerStarted","Data":"1a314158d169b9a39dfee165ecb725a0885f3346afcbcf55710102874a59de12"}
Dec 01 09:25:29 crc kubenswrapper[4689]: I1201 09:25:29.346660 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" event={"ID":"5351042e-776c-44c1-a6ad-bf530a24bfb7","Type":"ContainerStarted","Data":"5aecb83a3dd2232fd2c4c455324d8bc8364ab6aa1834d835522632ee66a5bd59"}
Dec 01 09:25:29 crc kubenswrapper[4689]: I1201 09:25:29.373736 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" podStartSLOduration=2.8917367 podStartE2EDuration="3.373718045s" podCreationTimestamp="2025-12-01 09:25:26 +0000 UTC" firstStartedPulling="2025-12-01 09:25:27.315061103 +0000 UTC m=+2807.387349017" lastFinishedPulling="2025-12-01 09:25:27.797042438 +0000 UTC m=+2807.869330362" observedRunningTime="2025-12-01 09:25:29.365589472 +0000 UTC m=+2809.437877376" watchObservedRunningTime="2025-12-01 09:25:29.373718045 +0000 UTC m=+2809.446005949"
Dec 01 09:25:39 crc kubenswrapper[4689]: I1201 09:25:39.147156 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:25:39 crc kubenswrapper[4689]: I1201 09:25:39.147720 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:26:09 crc kubenswrapper[4689]: I1201 09:26:09.147560 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:26:09 crc kubenswrapper[4689]: I1201 09:26:09.147898 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:26:39 crc kubenswrapper[4689]: I1201 09:26:39.147039 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:26:39 crc kubenswrapper[4689]: I1201 09:26:39.147704 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:26:39 crc kubenswrapper[4689]: I1201 09:26:39.147766 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx"
Dec 01 09:26:39 crc kubenswrapper[4689]: I1201 09:26:39.148734 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 09:26:39 crc kubenswrapper[4689]: I1201 09:26:39.148802 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e" gracePeriod=600
Dec 01 09:26:39 crc kubenswrapper[4689]: E1201 09:26:39.276955 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:26:40 crc kubenswrapper[4689]: I1201 09:26:40.136248 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e" exitCode=0
Dec 01 09:26:40 crc kubenswrapper[4689]: I1201 09:26:40.136325 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"}
Dec 01 09:26:40 crc kubenswrapper[4689]: I1201 09:26:40.136656 4689 scope.go:117] "RemoveContainer" containerID="6ffee746d9d222deafefbfb3f0b9f2751015c989eb3af976fcaabb3d08c27595"
Dec 01 09:26:40 crc kubenswrapper[4689]: I1201 09:26:40.142041 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:26:40 crc kubenswrapper[4689]: E1201 09:26:40.142481 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:26:53 crc kubenswrapper[4689]: I1201 09:26:53.047660 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:26:53 crc kubenswrapper[4689]: E1201 09:26:53.048894 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:27:04 crc kubenswrapper[4689]: I1201 09:27:04.047528 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:27:04 crc kubenswrapper[4689]: E1201 09:27:04.048263 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:27:15 crc kubenswrapper[4689]: I1201 09:27:15.063593 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:27:15 crc kubenswrapper[4689]: E1201 09:27:15.064831 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:27:27 crc kubenswrapper[4689]: I1201 09:27:27.048301 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:27:27 crc kubenswrapper[4689]: E1201 09:27:27.049133 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:27:41 crc kubenswrapper[4689]: I1201 09:27:41.055234 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:27:41 crc kubenswrapper[4689]: E1201 09:27:41.057388 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:27:54 crc kubenswrapper[4689]: I1201 09:27:54.048475 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:27:54 crc kubenswrapper[4689]: E1201 09:27:54.050757 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:28:06 crc kubenswrapper[4689]: I1201 09:28:06.047323 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:28:06 crc kubenswrapper[4689]: E1201 09:28:06.048344 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:28:20 crc kubenswrapper[4689]: I1201 09:28:20.053690 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:28:20 crc kubenswrapper[4689]: E1201 09:28:20.054684 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:28:32 crc kubenswrapper[4689]: I1201 09:28:32.048392 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:28:32 crc kubenswrapper[4689]: E1201 09:28:32.049139 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:28:32 crc kubenswrapper[4689]: I1201 09:28:32.369163 4689 generic.go:334] "Generic (PLEG): container finished" podID="5351042e-776c-44c1-a6ad-bf530a24bfb7" containerID="5aecb83a3dd2232fd2c4c455324d8bc8364ab6aa1834d835522632ee66a5bd59" exitCode=0
Dec 01 09:28:32 crc kubenswrapper[4689]: I1201 09:28:32.369221 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" event={"ID":"5351042e-776c-44c1-a6ad-bf530a24bfb7","Type":"ContainerDied","Data":"5aecb83a3dd2232fd2c4c455324d8bc8364ab6aa1834d835522632ee66a5bd59"}
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.849556 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg"
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.974207 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-inventory\") pod \"5351042e-776c-44c1-a6ad-bf530a24bfb7\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") "
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.974289 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-1\") pod \"5351042e-776c-44c1-a6ad-bf530a24bfb7\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") "
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.974338 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-0\") pod \"5351042e-776c-44c1-a6ad-bf530a24bfb7\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") "
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.974594 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-combined-ca-bundle\") pod \"5351042e-776c-44c1-a6ad-bf530a24bfb7\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") "
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.974637 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-extra-config-0\") pod \"5351042e-776c-44c1-a6ad-bf530a24bfb7\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") "
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.974666 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-1\") pod \"5351042e-776c-44c1-a6ad-bf530a24bfb7\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") "
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.974701 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-ssh-key\") pod \"5351042e-776c-44c1-a6ad-bf530a24bfb7\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") "
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.975721 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-0\") pod \"5351042e-776c-44c1-a6ad-bf530a24bfb7\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") "
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.975814 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8596c\" (UniqueName: \"kubernetes.io/projected/5351042e-776c-44c1-a6ad-bf530a24bfb7-kube-api-access-8596c\") pod \"5351042e-776c-44c1-a6ad-bf530a24bfb7\" (UID: \"5351042e-776c-44c1-a6ad-bf530a24bfb7\") "
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.981612 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5351042e-776c-44c1-a6ad-bf530a24bfb7" (UID: "5351042e-776c-44c1-a6ad-bf530a24bfb7"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:28:33 crc kubenswrapper[4689]: I1201 09:28:33.984679 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5351042e-776c-44c1-a6ad-bf530a24bfb7-kube-api-access-8596c" (OuterVolumeSpecName: "kube-api-access-8596c") pod "5351042e-776c-44c1-a6ad-bf530a24bfb7" (UID: "5351042e-776c-44c1-a6ad-bf530a24bfb7"). InnerVolumeSpecName "kube-api-access-8596c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.017517 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-inventory" (OuterVolumeSpecName: "inventory") pod "5351042e-776c-44c1-a6ad-bf530a24bfb7" (UID: "5351042e-776c-44c1-a6ad-bf530a24bfb7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.018867 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "5351042e-776c-44c1-a6ad-bf530a24bfb7" (UID: "5351042e-776c-44c1-a6ad-bf530a24bfb7"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.020128 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5351042e-776c-44c1-a6ad-bf530a24bfb7" (UID: "5351042e-776c-44c1-a6ad-bf530a24bfb7"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.020477 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "5351042e-776c-44c1-a6ad-bf530a24bfb7" (UID: "5351042e-776c-44c1-a6ad-bf530a24bfb7"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.021030 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5351042e-776c-44c1-a6ad-bf530a24bfb7" (UID: "5351042e-776c-44c1-a6ad-bf530a24bfb7"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.029180 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5351042e-776c-44c1-a6ad-bf530a24bfb7" (UID: "5351042e-776c-44c1-a6ad-bf530a24bfb7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.045288 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "5351042e-776c-44c1-a6ad-bf530a24bfb7" (UID: "5351042e-776c-44c1-a6ad-bf530a24bfb7"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.083496 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.083541 4689 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.083558 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8596c\" (UniqueName: \"kubernetes.io/projected/5351042e-776c-44c1-a6ad-bf530a24bfb7-kube-api-access-8596c\") on node \"crc\" DevicePath \"\""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.083571 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-inventory\") on node \"crc\" DevicePath \"\""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.083582 4689 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.083594 4689 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.083604 4689 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.083616 4689 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.083626 4689 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5351042e-776c-44c1-a6ad-bf530a24bfb7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.389831 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg" event={"ID":"5351042e-776c-44c1-a6ad-bf530a24bfb7","Type":"ContainerDied","Data":"1a314158d169b9a39dfee165ecb725a0885f3346afcbcf55710102874a59de12"}
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.390139 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a314158d169b9a39dfee165ecb725a0885f3346afcbcf55710102874a59de12"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.389888 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mkgfg"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.499648 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"]
Dec 01 09:28:34 crc kubenswrapper[4689]: E1201 09:28:34.500102 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5351042e-776c-44c1-a6ad-bf530a24bfb7" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.500122 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="5351042e-776c-44c1-a6ad-bf530a24bfb7" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.500380 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="5351042e-776c-44c1-a6ad-bf530a24bfb7" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.501058 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.504349 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.504617 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.504830 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.506664 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sh59x"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.506927 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.513449 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"]
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.692017 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.692070 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vwjx\" (UniqueName: \"kubernetes.io/projected/4a88b941-7390-4f78-83e5-733fe9d39482-kube-api-access-2vwjx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.692099 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.692349 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.692530 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.692578 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.692619 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.794532 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.794581 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.794606 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.794753 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.794796 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vwjx\" (UniqueName: \"kubernetes.io/projected/4a88b941-7390-4f78-83e5-733fe9d39482-kube-api-access-2vwjx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.794843 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.794909 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.799887 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.799974 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.806667 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.807654 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.808563 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.808663 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.815277 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vwjx\" (UniqueName: \"kubernetes.io/projected/4a88b941-7390-4f78-83e5-733fe9d39482-kube-api-access-2vwjx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g6njz\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:34 crc kubenswrapper[4689]: I1201 09:28:34.822049 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:28:35 crc kubenswrapper[4689]: I1201 09:28:35.384461 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"]
Dec 01 09:28:35 crc kubenswrapper[4689]: I1201 09:28:35.403153 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz" event={"ID":"4a88b941-7390-4f78-83e5-733fe9d39482","Type":"ContainerStarted","Data":"b151b35a15df0ac329b424f1f1ede9e9419d937b3bf8afc909aad92e11b78135"}
Dec 01 09:28:36 crc kubenswrapper[4689]: I1201 09:28:36.428562 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz" event={"ID":"4a88b941-7390-4f78-83e5-733fe9d39482","Type":"ContainerStarted","Data":"560686cb12220ca6b3fb437d5b1689f92f87215e357f357b1d165da2b99c12a4"}
Dec 01 09:28:36 crc kubenswrapper[4689]: I1201 09:28:36.456749 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz" podStartSLOduration=1.813182241 podStartE2EDuration="2.456728625s" podCreationTimestamp="2025-12-01 09:28:34 +0000 UTC" firstStartedPulling="2025-12-01 09:28:35.394171123 +0000 UTC m=+2995.466459027" lastFinishedPulling="2025-12-01 09:28:36.037717507 +0000 UTC m=+2996.110005411" observedRunningTime="2025-12-01 09:28:36.449319053 +0000 UTC m=+2996.521606967" watchObservedRunningTime="2025-12-01 09:28:36.456728625 +0000 UTC m=+2996.529016519"
Dec 01 09:28:46 crc kubenswrapper[4689]: I1201 09:28:46.047819 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:28:46 crc kubenswrapper[4689]: E1201 09:28:46.048695 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:28:59 crc kubenswrapper[4689]: I1201 09:28:59.047661 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:28:59 crc kubenswrapper[4689]: E1201 09:28:59.050161 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:29:11 crc kubenswrapper[4689]: I1201 09:29:11.054988 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:29:11 crc kubenswrapper[4689]: E1201 09:29:11.055961 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:29:26 crc kubenswrapper[4689]: I1201 09:29:26.048575 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:29:26 crc kubenswrapper[4689]: E1201 09:29:26.049423 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:29:40 crc kubenswrapper[4689]: I1201 09:29:40.048424 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:29:40 crc kubenswrapper[4689]: E1201 09:29:40.049804 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:29:51 crc kubenswrapper[4689]: I1201 09:29:51.055113 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:29:51 crc kubenswrapper[4689]: E1201 09:29:51.055855 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.155181 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"]
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.159233 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.161569 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.161639 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.166289 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"]
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.331016 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9816ea3b-c7d1-435d-a4ff-79cf3470c742-secret-volume\") pod \"collect-profiles-29409690-gq8bh\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.331253 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9816ea3b-c7d1-435d-a4ff-79cf3470c742-config-volume\") pod \"collect-profiles-29409690-gq8bh\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.331394 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd4rl\" (UniqueName: \"kubernetes.io/projected/9816ea3b-c7d1-435d-a4ff-79cf3470c742-kube-api-access-hd4rl\") pod \"collect-profiles-29409690-gq8bh\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.433123 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd4rl\" (UniqueName: \"kubernetes.io/projected/9816ea3b-c7d1-435d-a4ff-79cf3470c742-kube-api-access-hd4rl\") pod \"collect-profiles-29409690-gq8bh\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.433214 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9816ea3b-c7d1-435d-a4ff-79cf3470c742-secret-volume\") pod \"collect-profiles-29409690-gq8bh\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.433316 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9816ea3b-c7d1-435d-a4ff-79cf3470c742-config-volume\") pod \"collect-profiles-29409690-gq8bh\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.434400 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9816ea3b-c7d1-435d-a4ff-79cf3470c742-config-volume\") pod \"collect-profiles-29409690-gq8bh\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.440334 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9816ea3b-c7d1-435d-a4ff-79cf3470c742-secret-volume\") pod \"collect-profiles-29409690-gq8bh\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.451172 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd4rl\" (UniqueName: \"kubernetes.io/projected/9816ea3b-c7d1-435d-a4ff-79cf3470c742-kube-api-access-hd4rl\") pod \"collect-profiles-29409690-gq8bh\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:00 crc kubenswrapper[4689]: I1201 09:30:00.492779 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:01 crc kubenswrapper[4689]: I1201 09:30:01.022744 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"]
Dec 01 09:30:01 crc kubenswrapper[4689]: I1201 09:30:01.302060 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh" event={"ID":"9816ea3b-c7d1-435d-a4ff-79cf3470c742","Type":"ContainerStarted","Data":"70d4dfc52a0eda50c02830e08b74b2b8f8038137e219f25608c32c4eb2695c83"}
Dec 01 09:30:01 crc kubenswrapper[4689]: I1201 09:30:01.302121 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh" event={"ID":"9816ea3b-c7d1-435d-a4ff-79cf3470c742","Type":"ContainerStarted","Data":"9942d39e33e707712aa7dbb23a89a8981284189e905eb2e05dda1a763fc17bba"}
Dec 01 09:30:01 crc kubenswrapper[4689]: I1201 09:30:01.328421 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh" podStartSLOduration=1.328399646 podStartE2EDuration="1.328399646s" podCreationTimestamp="2025-12-01 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:30:01.317964033 +0000 UTC m=+3081.390251937" watchObservedRunningTime="2025-12-01 09:30:01.328399646 +0000 UTC m=+3081.400687560"
Dec 01 09:30:02 crc kubenswrapper[4689]: I1201 09:30:02.314282 4689 generic.go:334] "Generic (PLEG): container finished" podID="9816ea3b-c7d1-435d-a4ff-79cf3470c742" containerID="70d4dfc52a0eda50c02830e08b74b2b8f8038137e219f25608c32c4eb2695c83" exitCode=0
Dec 01 09:30:02 crc kubenswrapper[4689]: I1201 09:30:02.314418 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh" event={"ID":"9816ea3b-c7d1-435d-a4ff-79cf3470c742","Type":"ContainerDied","Data":"70d4dfc52a0eda50c02830e08b74b2b8f8038137e219f25608c32c4eb2695c83"}
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.686526 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.825236 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd4rl\" (UniqueName: \"kubernetes.io/projected/9816ea3b-c7d1-435d-a4ff-79cf3470c742-kube-api-access-hd4rl\") pod \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") "
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.825505 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9816ea3b-c7d1-435d-a4ff-79cf3470c742-config-volume\") pod \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") "
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.825543 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9816ea3b-c7d1-435d-a4ff-79cf3470c742-secret-volume\") pod \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\" (UID: \"9816ea3b-c7d1-435d-a4ff-79cf3470c742\") "
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.826958 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9816ea3b-c7d1-435d-a4ff-79cf3470c742-config-volume" (OuterVolumeSpecName: "config-volume") pod "9816ea3b-c7d1-435d-a4ff-79cf3470c742" (UID: "9816ea3b-c7d1-435d-a4ff-79cf3470c742"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.831525 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9816ea3b-c7d1-435d-a4ff-79cf3470c742-kube-api-access-hd4rl" (OuterVolumeSpecName: "kube-api-access-hd4rl") pod "9816ea3b-c7d1-435d-a4ff-79cf3470c742" (UID: "9816ea3b-c7d1-435d-a4ff-79cf3470c742"). InnerVolumeSpecName "kube-api-access-hd4rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.831639 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9816ea3b-c7d1-435d-a4ff-79cf3470c742-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9816ea3b-c7d1-435d-a4ff-79cf3470c742" (UID: "9816ea3b-c7d1-435d-a4ff-79cf3470c742"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.928032 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9816ea3b-c7d1-435d-a4ff-79cf3470c742-config-volume\") on node \"crc\" DevicePath \"\""
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.928067 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9816ea3b-c7d1-435d-a4ff-79cf3470c742-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 01 09:30:03 crc kubenswrapper[4689]: I1201 09:30:03.928078 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd4rl\" (UniqueName: \"kubernetes.io/projected/9816ea3b-c7d1-435d-a4ff-79cf3470c742-kube-api-access-hd4rl\") on node \"crc\" DevicePath \"\""
Dec 01 09:30:04 crc kubenswrapper[4689]: I1201 09:30:04.048356 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:30:04 crc kubenswrapper[4689]: E1201 09:30:04.048983 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:30:04 crc kubenswrapper[4689]: I1201 09:30:04.337785 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh" event={"ID":"9816ea3b-c7d1-435d-a4ff-79cf3470c742","Type":"ContainerDied","Data":"9942d39e33e707712aa7dbb23a89a8981284189e905eb2e05dda1a763fc17bba"}
Dec 01 09:30:04 crc kubenswrapper[4689]: I1201 09:30:04.337873 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409690-gq8bh"
Dec 01 09:30:04 crc kubenswrapper[4689]: I1201 09:30:04.338266 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9942d39e33e707712aa7dbb23a89a8981284189e905eb2e05dda1a763fc17bba"
Dec 01 09:30:04 crc kubenswrapper[4689]: I1201 09:30:04.403625 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"]
Dec 01 09:30:04 crc kubenswrapper[4689]: I1201 09:30:04.415184 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409645-lhqn8"]
Dec 01 09:30:05 crc kubenswrapper[4689]: I1201 09:30:05.061347 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6fd5553-6e2f-4b49-93c0-f03807e48f54" path="/var/lib/kubelet/pods/a6fd5553-6e2f-4b49-93c0-f03807e48f54/volumes"
Dec 01 09:30:18 crc kubenswrapper[4689]: I1201 09:30:18.046948 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:30:18 crc kubenswrapper[4689]: E1201 09:30:18.047653 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:30:30 crc kubenswrapper[4689]: I1201 09:30:30.053510 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:30:30 crc kubenswrapper[4689]: E1201 09:30:30.054344 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:30:45 crc kubenswrapper[4689]: I1201 09:30:45.047931 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:30:45 crc kubenswrapper[4689]: E1201 09:30:45.048860 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:30:48 crc kubenswrapper[4689]: I1201 09:30:48.427948 4689 scope.go:117] "RemoveContainer" containerID="1dd93318847e65d87543324043ddf9a6763d5a6d37cb0375cb4fee2781bb9041"
Dec 01 09:30:56 crc kubenswrapper[4689]: I1201 09:30:56.048632 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:30:56 crc kubenswrapper[4689]: E1201 09:30:56.049381 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:31:11 crc kubenswrapper[4689]: I1201 09:31:11.054076 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:31:11 crc kubenswrapper[4689]: E1201 09:31:11.054974 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:31:22 crc kubenswrapper[4689]: I1201 09:31:22.047957 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:31:22 crc kubenswrapper[4689]: E1201 09:31:22.048744 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:31:33 crc kubenswrapper[4689]: I1201 09:31:33.048066 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:31:33 crc kubenswrapper[4689]: E1201 09:31:33.048724 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:31:36 crc kubenswrapper[4689]: I1201 09:31:36.261331 4689 generic.go:334] "Generic (PLEG): container finished" podID="4a88b941-7390-4f78-83e5-733fe9d39482" containerID="560686cb12220ca6b3fb437d5b1689f92f87215e357f357b1d165da2b99c12a4" exitCode=0
Dec 01 09:31:36 crc kubenswrapper[4689]: I1201 09:31:36.261429 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz" event={"ID":"4a88b941-7390-4f78-83e5-733fe9d39482","Type":"ContainerDied","Data":"560686cb12220ca6b3fb437d5b1689f92f87215e357f357b1d165da2b99c12a4"}
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.709309 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.848310 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-0\") pod \"4a88b941-7390-4f78-83e5-733fe9d39482\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") "
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.848468 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vwjx\" (UniqueName: \"kubernetes.io/projected/4a88b941-7390-4f78-83e5-733fe9d39482-kube-api-access-2vwjx\") pod \"4a88b941-7390-4f78-83e5-733fe9d39482\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") "
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.848560 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-2\") pod \"4a88b941-7390-4f78-83e5-733fe9d39482\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") "
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.848590 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-inventory\") pod \"4a88b941-7390-4f78-83e5-733fe9d39482\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") "
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.848668 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-telemetry-combined-ca-bundle\") pod \"4a88b941-7390-4f78-83e5-733fe9d39482\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") "
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.848712 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-1\") pod \"4a88b941-7390-4f78-83e5-733fe9d39482\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") "
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.848755 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ssh-key\") pod \"4a88b941-7390-4f78-83e5-733fe9d39482\" (UID: \"4a88b941-7390-4f78-83e5-733fe9d39482\") "
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.855712 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4a88b941-7390-4f78-83e5-733fe9d39482" (UID: "4a88b941-7390-4f78-83e5-733fe9d39482"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.856433 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a88b941-7390-4f78-83e5-733fe9d39482-kube-api-access-2vwjx" (OuterVolumeSpecName: "kube-api-access-2vwjx") pod "4a88b941-7390-4f78-83e5-733fe9d39482" (UID: "4a88b941-7390-4f78-83e5-733fe9d39482"). InnerVolumeSpecName "kube-api-access-2vwjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.921966 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "4a88b941-7390-4f78-83e5-733fe9d39482" (UID: "4a88b941-7390-4f78-83e5-733fe9d39482"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.921988 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "4a88b941-7390-4f78-83e5-733fe9d39482" (UID: "4a88b941-7390-4f78-83e5-733fe9d39482"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.922040 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "4a88b941-7390-4f78-83e5-733fe9d39482" (UID: "4a88b941-7390-4f78-83e5-733fe9d39482"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.924788 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-inventory" (OuterVolumeSpecName: "inventory") pod "4a88b941-7390-4f78-83e5-733fe9d39482" (UID: "4a88b941-7390-4f78-83e5-733fe9d39482"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.928330 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4a88b941-7390-4f78-83e5-733fe9d39482" (UID: "4a88b941-7390-4f78-83e5-733fe9d39482"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.951762 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vwjx\" (UniqueName: \"kubernetes.io/projected/4a88b941-7390-4f78-83e5-733fe9d39482-kube-api-access-2vwjx\") on node \"crc\" DevicePath \"\""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.951802 4689 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.951815 4689 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-inventory\") on node \"crc\" DevicePath \"\""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.951830 4689 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.951846 4689 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.951859 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 01 09:31:37 crc kubenswrapper[4689]: I1201 09:31:37.951871 4689 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4a88b941-7390-4f78-83e5-733fe9d39482-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Dec 01 09:31:38 crc kubenswrapper[4689]: I1201 09:31:38.289294 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz" event={"ID":"4a88b941-7390-4f78-83e5-733fe9d39482","Type":"ContainerDied","Data":"b151b35a15df0ac329b424f1f1ede9e9419d937b3bf8afc909aad92e11b78135"}
Dec 01 09:31:38 crc kubenswrapper[4689]: I1201 09:31:38.289346 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b151b35a15df0ac329b424f1f1ede9e9419d937b3bf8afc909aad92e11b78135"
Dec 01 09:31:38 crc kubenswrapper[4689]: I1201 09:31:38.289793 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g6njz"
Dec 01 09:31:48 crc kubenswrapper[4689]: I1201 09:31:48.047573 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e"
Dec 01 09:31:49 crc kubenswrapper[4689]: I1201 09:31:49.392492 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"4faa3fc9d613af72eff28875b4605ed4e4b31f63bb6f62515a694b8dd41544ee"}
Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.586824 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5jjxp"]
Dec 01 09:31:50 crc kubenswrapper[4689]: E1201 09:31:50.587870 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a88b941-7390-4f78-83e5-733fe9d39482" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.587889 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a88b941-7390-4f78-83e5-733fe9d39482" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Dec 01 09:31:50 crc kubenswrapper[4689]: E1201 09:31:50.587926 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9816ea3b-c7d1-435d-a4ff-79cf3470c742" containerName="collect-profiles"
Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.587933 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="9816ea3b-c7d1-435d-a4ff-79cf3470c742" containerName="collect-profiles"
Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.588190 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="9816ea3b-c7d1-435d-a4ff-79cf3470c742" containerName="collect-profiles"
Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.588214 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a88b941-7390-4f78-83e5-733fe9d39482" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.590101 4689 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.600327 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jjxp"] Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.713668 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mffq6\" (UniqueName: \"kubernetes.io/projected/ad329cd4-6a69-4c79-973f-95936c5e5981-kube-api-access-mffq6\") pod \"redhat-marketplace-5jjxp\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.713797 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-catalog-content\") pod \"redhat-marketplace-5jjxp\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.713919 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-utilities\") pod \"redhat-marketplace-5jjxp\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.817872 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mffq6\" (UniqueName: \"kubernetes.io/projected/ad329cd4-6a69-4c79-973f-95936c5e5981-kube-api-access-mffq6\") pod \"redhat-marketplace-5jjxp\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.817996 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-catalog-content\") pod \"redhat-marketplace-5jjxp\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.818056 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-utilities\") pod \"redhat-marketplace-5jjxp\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.818682 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-utilities\") pod \"redhat-marketplace-5jjxp\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.819300 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-catalog-content\") pod \"redhat-marketplace-5jjxp\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.845913 4689 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mffq6\" (UniqueName: \"kubernetes.io/projected/ad329cd4-6a69-4c79-973f-95936c5e5981-kube-api-access-mffq6\") pod \"redhat-marketplace-5jjxp\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:50 crc kubenswrapper[4689]: I1201 09:31:50.914188 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:31:51 crc kubenswrapper[4689]: I1201 09:31:51.481454 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jjxp"] Dec 01 09:31:52 crc kubenswrapper[4689]: I1201 09:31:52.436100 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jjxp" event={"ID":"ad329cd4-6a69-4c79-973f-95936c5e5981","Type":"ContainerStarted","Data":"eef0e0e695ceb0e45abdf51d4ef19cd8a3b117f5437a45408a1f94af572e4fa7"} Dec 01 09:31:53 crc kubenswrapper[4689]: I1201 09:31:53.448399 4689 generic.go:334] "Generic (PLEG): container finished" podID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerID="4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a" exitCode=0 Dec 01 09:31:53 crc kubenswrapper[4689]: I1201 09:31:53.448503 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jjxp" event={"ID":"ad329cd4-6a69-4c79-973f-95936c5e5981","Type":"ContainerDied","Data":"4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a"} Dec 01 09:31:53 crc kubenswrapper[4689]: I1201 09:31:53.452549 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 09:31:55 crc kubenswrapper[4689]: I1201 09:31:55.470833 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jjxp" event={"ID":"ad329cd4-6a69-4c79-973f-95936c5e5981","Type":"ContainerStarted","Data":"d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece"} Dec 01 09:31:56 crc kubenswrapper[4689]: I1201 09:31:56.482212 4689 generic.go:334] "Generic (PLEG): container finished" podID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerID="d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece" exitCode=0 Dec 01 09:31:56 crc kubenswrapper[4689]: I1201 09:31:56.482264 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jjxp" event={"ID":"ad329cd4-6a69-4c79-973f-95936c5e5981","Type":"ContainerDied","Data":"d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece"} Dec 01 09:31:57 crc kubenswrapper[4689]: I1201 09:31:57.497895 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jjxp" event={"ID":"ad329cd4-6a69-4c79-973f-95936c5e5981","Type":"ContainerStarted","Data":"7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402"} Dec 01 09:31:57 crc kubenswrapper[4689]: I1201 09:31:57.523577 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5jjxp" podStartSLOduration=3.897978525 podStartE2EDuration="7.523558442s" podCreationTimestamp="2025-12-01 09:31:50 +0000 UTC" firstStartedPulling="2025-12-01 09:31:53.451949023 +0000 UTC m=+3193.524236927" lastFinishedPulling="2025-12-01 09:31:57.07752894 +0000 UTC m=+3197.149816844" observedRunningTime="2025-12-01 09:31:57.519357088 +0000 UTC m=+3197.591645002" watchObservedRunningTime="2025-12-01 09:31:57.523558442 +0000 UTC 
m=+3197.595846346" Dec 01 09:32:00 crc kubenswrapper[4689]: I1201 09:32:00.916053 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:32:00 crc kubenswrapper[4689]: I1201 09:32:00.917500 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:32:00 crc kubenswrapper[4689]: I1201 09:32:00.999404 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:32:10 crc kubenswrapper[4689]: I1201 09:32:10.980581 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:32:11 crc kubenswrapper[4689]: I1201 09:32:11.044095 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jjxp"] Dec 01 09:32:11 crc kubenswrapper[4689]: I1201 09:32:11.992742 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5jjxp" podUID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerName="registry-server" containerID="cri-o://7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402" gracePeriod=2 Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.458411 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.593794 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mffq6\" (UniqueName: \"kubernetes.io/projected/ad329cd4-6a69-4c79-973f-95936c5e5981-kube-api-access-mffq6\") pod \"ad329cd4-6a69-4c79-973f-95936c5e5981\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.593987 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-catalog-content\") pod \"ad329cd4-6a69-4c79-973f-95936c5e5981\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.594055 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-utilities\") pod \"ad329cd4-6a69-4c79-973f-95936c5e5981\" (UID: \"ad329cd4-6a69-4c79-973f-95936c5e5981\") " Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.595099 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-utilities" (OuterVolumeSpecName: "utilities") pod "ad329cd4-6a69-4c79-973f-95936c5e5981" (UID: "ad329cd4-6a69-4c79-973f-95936c5e5981"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.605862 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad329cd4-6a69-4c79-973f-95936c5e5981-kube-api-access-mffq6" (OuterVolumeSpecName: "kube-api-access-mffq6") pod "ad329cd4-6a69-4c79-973f-95936c5e5981" (UID: "ad329cd4-6a69-4c79-973f-95936c5e5981"). InnerVolumeSpecName "kube-api-access-mffq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.620512 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad329cd4-6a69-4c79-973f-95936c5e5981" (UID: "ad329cd4-6a69-4c79-973f-95936c5e5981"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.696410 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mffq6\" (UniqueName: \"kubernetes.io/projected/ad329cd4-6a69-4c79-973f-95936c5e5981-kube-api-access-mffq6\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.696461 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:12 crc kubenswrapper[4689]: I1201 09:32:12.696471 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad329cd4-6a69-4c79-973f-95936c5e5981-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.006179 4689 generic.go:334] "Generic (PLEG): container finished" podID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerID="7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402" exitCode=0 Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.006269 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jjxp" event={"ID":"ad329cd4-6a69-4c79-973f-95936c5e5981","Type":"ContainerDied","Data":"7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402"} Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.006598 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jjxp" event={"ID":"ad329cd4-6a69-4c79-973f-95936c5e5981","Type":"ContainerDied","Data":"eef0e0e695ceb0e45abdf51d4ef19cd8a3b117f5437a45408a1f94af572e4fa7"} Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.006281 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jjxp" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.006622 4689 scope.go:117] "RemoveContainer" containerID="7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.042034 4689 scope.go:117] "RemoveContainer" containerID="d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.070563 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jjxp"] Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.089585 4689 scope.go:117] "RemoveContainer" containerID="4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.100448 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jjxp"] Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.183535 4689 scope.go:117] "RemoveContainer" containerID="7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402" Dec 01 09:32:13 crc kubenswrapper[4689]: E1201 09:32:13.189592 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402\": container with ID starting with 7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402 not found: ID does not exist" containerID="7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.189697 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402"} err="failed to get container status \"7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402\": rpc error: code = NotFound desc = could not find container \"7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402\": container with ID starting with 7db028b771e93b8070179febfccdf42c4e395daf76ba36332033ff253a22f402 not found: ID does not exist" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.189735 4689 scope.go:117] "RemoveContainer" containerID="d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece" Dec 01 09:32:13 crc kubenswrapper[4689]: E1201 09:32:13.192639 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece\": container with ID starting with d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece not found: ID does not exist" containerID="d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.192709 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece"} err="failed to get container status \"d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece\": rpc error: code = NotFound desc = could not find container \"d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece\": container with ID starting with d8c1df193ac78fb43d61b0620ba8a520ee3cda6c604e58feb00836a2ce4d0ece not found: ID does not exist" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.192745 4689 scope.go:117] "RemoveContainer" 
containerID="4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a" Dec 01 09:32:13 crc kubenswrapper[4689]: E1201 09:32:13.200668 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a\": container with ID starting with 4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a not found: ID does not exist" containerID="4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a" Dec 01 09:32:13 crc kubenswrapper[4689]: I1201 09:32:13.200723 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a"} err="failed to get container status \"4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a\": rpc error: code = NotFound desc = could not find container \"4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a\": container with ID starting with 4045a59622fc1df287ca0845515f9c53f19073a46ce24c90f2cbc04d9aa1e51a not found: ID does not exist" Dec 01 09:32:15 crc kubenswrapper[4689]: I1201 09:32:15.068650 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad329cd4-6a69-4c79-973f-95936c5e5981" path="/var/lib/kubelet/pods/ad329cd4-6a69-4c79-973f-95936c5e5981/volumes" Dec 01 09:32:38 crc kubenswrapper[4689]: I1201 09:32:38.871675 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6nbt2"] Dec 01 09:32:38 crc kubenswrapper[4689]: E1201 09:32:38.872787 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerName="registry-server" Dec 01 09:32:38 crc kubenswrapper[4689]: I1201 09:32:38.872803 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerName="registry-server" Dec 01 09:32:38 crc kubenswrapper[4689]: E1201 09:32:38.872846 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerName="extract-content" Dec 01 09:32:38 crc kubenswrapper[4689]: I1201 09:32:38.872855 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerName="extract-content" Dec 01 09:32:38 crc kubenswrapper[4689]: E1201 09:32:38.872888 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerName="extract-utilities" Dec 01 09:32:38 crc kubenswrapper[4689]: I1201 09:32:38.872900 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerName="extract-utilities" Dec 01 09:32:38 crc kubenswrapper[4689]: I1201 09:32:38.873158 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad329cd4-6a69-4c79-973f-95936c5e5981" containerName="registry-server" Dec 01 09:32:38 crc kubenswrapper[4689]: I1201 09:32:38.875029 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:38 crc kubenswrapper[4689]: I1201 09:32:38.886029 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6nbt2"] Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.012044 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcp22\" (UniqueName: \"kubernetes.io/projected/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-kube-api-access-mcp22\") pod \"certified-operators-6nbt2\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") " pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.012228 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-utilities\") pod \"certified-operators-6nbt2\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") " pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.012298 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-catalog-content\") pod \"certified-operators-6nbt2\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") " pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.114419 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-utilities\") pod \"certified-operators-6nbt2\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") " pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.114583 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-catalog-content\") pod \"certified-operators-6nbt2\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") " pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.114650 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcp22\" (UniqueName: \"kubernetes.io/projected/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-kube-api-access-mcp22\") pod \"certified-operators-6nbt2\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") " pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.115548 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-utilities\") pod \"certified-operators-6nbt2\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") " pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.115606 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-catalog-content\") pod \"certified-operators-6nbt2\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") " pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.137584 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mcp22\" (UniqueName: \"kubernetes.io/projected/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-kube-api-access-mcp22\") pod \"certified-operators-6nbt2\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") " pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.206421 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:39 crc kubenswrapper[4689]: I1201 09:32:39.871835 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6nbt2"] Dec 01 09:32:40 crc kubenswrapper[4689]: I1201 09:32:40.371452 4689 generic.go:334] "Generic (PLEG): container finished" podID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerID="6cfa1ab9ec94043991b6a2dd5c52d261ae0392bf10b9c52c4c891efbfcd88a7e" exitCode=0 Dec 01 09:32:40 crc kubenswrapper[4689]: I1201 09:32:40.371663 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nbt2" event={"ID":"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70","Type":"ContainerDied","Data":"6cfa1ab9ec94043991b6a2dd5c52d261ae0392bf10b9c52c4c891efbfcd88a7e"} Dec 01 09:32:40 crc kubenswrapper[4689]: I1201 09:32:40.371996 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nbt2" event={"ID":"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70","Type":"ContainerStarted","Data":"275f5923605e5cffd743649e83dec0d6483d1292a4224fb5b0670be15cfedd8b"} Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.394325 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.398319 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.401965 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.402168 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.402335 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.402633 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nbt2" event={"ID":"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70","Type":"ContainerStarted","Data":"a458538119c634073952508396e26c949b8ed197ad361b99f1a9bb1a6d39477c"} Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.411035 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ph4vn" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.436260 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.573004 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.573071 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.573397 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-config-data\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.573551 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.573605 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.573683 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.573808 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvkqx\" (UniqueName: \"kubernetes.io/projected/107c3226-2b1b-4f80-9670-8f0c1ffd3337-kube-api-access-kvkqx\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.573939 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.574096 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.675855 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.675937 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.675980 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.676037 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-config-data\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.676070 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.676092 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 
01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.676115 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.676134 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvkqx\" (UniqueName: \"kubernetes.io/projected/107c3226-2b1b-4f80-9670-8f0c1ffd3337-kube-api-access-kvkqx\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.676162 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.676382 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.676683 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.677872 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.679662 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-config-data\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.679779 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.681716 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.682984 4689 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.684603 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.698977 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvkqx\" (UniqueName: \"kubernetes.io/projected/107c3226-2b1b-4f80-9670-8f0c1ffd3337-kube-api-access-kvkqx\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.713799 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") " pod="openstack/tempest-tests-tempest" Dec 01 09:32:41 crc kubenswrapper[4689]: I1201 09:32:41.746632 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 09:32:42 crc kubenswrapper[4689]: I1201 09:32:42.292839 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 01 09:32:42 crc kubenswrapper[4689]: I1201 09:32:42.412316 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"107c3226-2b1b-4f80-9670-8f0c1ffd3337","Type":"ContainerStarted","Data":"b0140a844a48251dfbbf5ac0af72c55535fbb30ee921a682fbf1d0e1e7385480"} Dec 01 09:32:44 crc kubenswrapper[4689]: I1201 09:32:44.440964 4689 generic.go:334] "Generic (PLEG): container finished" podID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerID="a458538119c634073952508396e26c949b8ed197ad361b99f1a9bb1a6d39477c" exitCode=0 Dec 01 09:32:44 crc kubenswrapper[4689]: I1201 09:32:44.441162 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nbt2" event={"ID":"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70","Type":"ContainerDied","Data":"a458538119c634073952508396e26c949b8ed197ad361b99f1a9bb1a6d39477c"} Dec 01 09:32:46 crc kubenswrapper[4689]: I1201 09:32:46.506339 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nbt2" event={"ID":"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70","Type":"ContainerStarted","Data":"95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe"} Dec 01 09:32:46 crc kubenswrapper[4689]: I1201 09:32:46.531090 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6nbt2" podStartSLOduration=3.550725559 podStartE2EDuration="8.531070446s" podCreationTimestamp="2025-12-01 09:32:38 +0000 UTC" firstStartedPulling="2025-12-01 09:32:40.374566887 +0000 UTC m=+3240.446854801" lastFinishedPulling="2025-12-01 09:32:45.354911774 +0000 UTC m=+3245.427199688" observedRunningTime="2025-12-01 09:32:46.530650845 +0000 UTC m=+3246.602938779" watchObservedRunningTime="2025-12-01 09:32:46.531070446 +0000 UTC m=+3246.603358340" Dec 01 09:32:49 crc 
kubenswrapper[4689]: I1201 09:32:49.207054 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:49 crc kubenswrapper[4689]: I1201 09:32:49.207137 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:50 crc kubenswrapper[4689]: I1201 09:32:50.254621 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6nbt2" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="registry-server" probeResult="failure" output=< Dec 01 09:32:50 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s Dec 01 09:32:50 crc kubenswrapper[4689]: > Dec 01 09:32:59 crc kubenswrapper[4689]: I1201 09:32:59.320815 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:59 crc kubenswrapper[4689]: I1201 09:32:59.393286 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6nbt2" Dec 01 09:32:59 crc kubenswrapper[4689]: I1201 09:32:59.565764 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6nbt2"] Dec 01 09:33:00 crc kubenswrapper[4689]: I1201 09:33:00.704934 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6nbt2" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="registry-server" containerID="cri-o://95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" gracePeriod=2 Dec 01 09:33:01 crc kubenswrapper[4689]: I1201 09:33:01.735787 4689 generic.go:334] "Generic (PLEG): container finished" podID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" exitCode=0 Dec 01 09:33:01 crc kubenswrapper[4689]: I1201 09:33:01.735833 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nbt2" event={"ID":"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70","Type":"ContainerDied","Data":"95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe"} Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.389427 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqnhc"] Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.392867 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.399178 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqnhc"] Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.445109 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcwvh\" (UniqueName: \"kubernetes.io/projected/493e02e9-20cb-4ef2-b0d7-94896afe320d-kube-api-access-gcwvh\") pod \"community-operators-hqnhc\" (UID: \"493e02e9-20cb-4ef2-b0d7-94896afe320d\") " pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.445248 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493e02e9-20cb-4ef2-b0d7-94896afe320d-utilities\") pod \"community-operators-hqnhc\" (UID: \"493e02e9-20cb-4ef2-b0d7-94896afe320d\") " pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.445427 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493e02e9-20cb-4ef2-b0d7-94896afe320d-catalog-content\") pod \"community-operators-hqnhc\" (UID: \"493e02e9-20cb-4ef2-b0d7-94896afe320d\") " pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.547962 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493e02e9-20cb-4ef2-b0d7-94896afe320d-catalog-content\") pod \"community-operators-hqnhc\" (UID: \"493e02e9-20cb-4ef2-b0d7-94896afe320d\") " pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.548110 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcwvh\" (UniqueName: \"kubernetes.io/projected/493e02e9-20cb-4ef2-b0d7-94896afe320d-kube-api-access-gcwvh\") pod \"community-operators-hqnhc\" (UID: \"493e02e9-20cb-4ef2-b0d7-94896afe320d\") " pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.548160 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493e02e9-20cb-4ef2-b0d7-94896afe320d-utilities\") pod \"community-operators-hqnhc\" (UID: \"493e02e9-20cb-4ef2-b0d7-94896afe320d\") " pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.548884 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493e02e9-20cb-4ef2-b0d7-94896afe320d-utilities\") pod \"community-operators-hqnhc\" (UID: \"493e02e9-20cb-4ef2-b0d7-94896afe320d\") " pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.549190 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493e02e9-20cb-4ef2-b0d7-94896afe320d-catalog-content\") pod \"community-operators-hqnhc\" (UID: \"493e02e9-20cb-4ef2-b0d7-94896afe320d\") " pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.580915 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gcwvh\" (UniqueName: \"kubernetes.io/projected/493e02e9-20cb-4ef2-b0d7-94896afe320d-kube-api-access-gcwvh\") pod \"community-operators-hqnhc\" (UID: \"493e02e9-20cb-4ef2-b0d7-94896afe320d\") " pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:06 crc kubenswrapper[4689]: I1201 09:33:06.753206 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:09 crc kubenswrapper[4689]: E1201 09:33:09.661498 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 09:33:09 crc kubenswrapper[4689]: E1201 09:33:09.662734 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 09:33:09 crc kubenswrapper[4689]: E1201 09:33:09.663086 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 09:33:09 crc kubenswrapper[4689]: E1201 09:33:09.663110 4689 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6nbt2" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="registry-server" Dec 01 09:33:19 crc kubenswrapper[4689]: E1201 09:33:19.207225 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 09:33:19 crc kubenswrapper[4689]: E1201 09:33:19.208702 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 09:33:19 crc kubenswrapper[4689]: E1201 09:33:19.209089 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" 
containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 09:33:19 crc kubenswrapper[4689]: E1201 09:33:19.209118 4689 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6nbt2" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="registry-server"
Dec 01 09:33:28 crc kubenswrapper[4689]: E1201 09:33:28.958954 4689 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Dec 01 09:33:28 crc kubenswrapper[4689]: E1201 09:33:28.961734 4689 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvkqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(107c3226-2b1b-4f80-9670-8f0c1ffd3337): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 01 09:33:28 crc kubenswrapper[4689]: E1201 09:33:28.964239 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="107c3226-2b1b-4f80-9670-8f0c1ffd3337"
Dec 01 09:33:29 crc kubenswrapper[4689]: E1201 09:33:29.049070 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="107c3226-2b1b-4f80-9670-8f0c1ffd3337"
Dec 01 09:33:29 crc kubenswrapper[4689]: E1201 09:33:29.207139 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 09:33:29 crc kubenswrapper[4689]: E1201 09:33:29.207538 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 09:33:29 crc kubenswrapper[4689]: E1201 09:33:29.207847 4689 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe" cmd=["grpc_health_probe","-addr=:50051"]
Dec 01 09:33:29 crc kubenswrapper[4689]: E1201 09:33:29.207882 4689 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6nbt2" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="registry-server"
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.369647 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6nbt2"
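The exec probe in the entries above is the marketplace registry-server readiness check: the kubelet runs grpc_health_probe -addr=:50051 inside the container, and once the container process has exited the runtime can only answer NotFound. A minimal Go sketch of what such a probe binary does, using the standard grpc-go health API (address, timeout, and the error text are illustrative, not the real tool's source):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Dial the registry-server's gRPC endpoint (":50051" in the probes above).
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		// Mimics the probe output recorded later in this log.
		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1)
	}
	defer conn.Close()

	// Standard gRPC health check: grpc.health.v1.Health/Check.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		os.Exit(1)
	}
}
```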
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.465334 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-utilities\") pod \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") "
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.465998 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-catalog-content\") pod \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") "
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.466133 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-utilities" (OuterVolumeSpecName: "utilities") pod "454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" (UID: "454bf5f0-e32c-4f8a-b10a-21dbcdd69f70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.466197 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcp22\" (UniqueName: \"kubernetes.io/projected/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-kube-api-access-mcp22\") pod \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\" (UID: \"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70\") "
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.467569 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.489445 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-kube-api-access-mcp22" (OuterVolumeSpecName: "kube-api-access-mcp22") pod "454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" (UID: "454bf5f0-e32c-4f8a-b10a-21dbcdd69f70"). InnerVolumeSpecName "kube-api-access-mcp22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.512391 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" (UID: "454bf5f0-e32c-4f8a-b10a-21dbcdd69f70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.540611 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqnhc"]
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.631142 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:33:29 crc kubenswrapper[4689]: I1201 09:33:29.631198 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcp22\" (UniqueName: \"kubernetes.io/projected/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70-kube-api-access-mcp22\") on node \"crc\" DevicePath \"\""
Dec 01 09:33:30 crc kubenswrapper[4689]: I1201 09:33:30.063737 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6nbt2" event={"ID":"454bf5f0-e32c-4f8a-b10a-21dbcdd69f70","Type":"ContainerDied","Data":"275f5923605e5cffd743649e83dec0d6483d1292a4224fb5b0670be15cfedd8b"}
Dec 01 09:33:30 crc kubenswrapper[4689]: I1201 09:33:30.064083 4689 scope.go:117] "RemoveContainer" containerID="95c3d079b45d21f6396066ddedb7fae8c5e05b087cc2d29895c1fa375ad7b2fe"
Dec 01 09:33:30 crc kubenswrapper[4689]: I1201 09:33:30.063806 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6nbt2"
Dec 01 09:33:30 crc kubenswrapper[4689]: I1201 09:33:30.067407 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqnhc" event={"ID":"493e02e9-20cb-4ef2-b0d7-94896afe320d","Type":"ContainerStarted","Data":"7745f3fa2e35da306017f85c75122bb521e65ad369375c67aabd0137ae84783e"}
Dec 01 09:33:30 crc kubenswrapper[4689]: I1201 09:33:30.067455 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqnhc" event={"ID":"493e02e9-20cb-4ef2-b0d7-94896afe320d","Type":"ContainerStarted","Data":"c8301764fb5df110f6c760d1f34b3438526dc3723fbe28bd51b88ea3ee5424fb"}
Dec 01 09:33:30 crc kubenswrapper[4689]: I1201 09:33:30.100206 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6nbt2"]
Dec 01 09:33:30 crc kubenswrapper[4689]: I1201 09:33:30.109609 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6nbt2"]
Dec 01 09:33:30 crc kubenswrapper[4689]: I1201 09:33:30.304044 4689 scope.go:117] "RemoveContainer" containerID="a458538119c634073952508396e26c949b8ed197ad361b99f1a9bb1a6d39477c"
Dec 01 09:33:30 crc kubenswrapper[4689]: I1201 09:33:30.338184 4689 scope.go:117] "RemoveContainer" containerID="6cfa1ab9ec94043991b6a2dd5c52d261ae0392bf10b9c52c4c891efbfcd88a7e"
Dec 01 09:33:31 crc kubenswrapper[4689]: I1201 09:33:31.078693 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" path="/var/lib/kubelet/pods/454bf5f0-e32c-4f8a-b10a-21dbcdd69f70/volumes"
Dec 01 09:33:31 crc kubenswrapper[4689]: I1201 09:33:31.100290 4689 generic.go:334] "Generic (PLEG): container finished" podID="493e02e9-20cb-4ef2-b0d7-94896afe320d" containerID="7745f3fa2e35da306017f85c75122bb521e65ad369375c67aabd0137ae84783e" exitCode=0
Dec 01 09:33:31 crc kubenswrapper[4689]: I1201 09:33:31.100356 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqnhc" event={"ID":"493e02e9-20cb-4ef2-b0d7-94896afe320d","Type":"ContainerDied","Data":"7745f3fa2e35da306017f85c75122bb521e65ad369375c67aabd0137ae84783e"}
event={"ID":"493e02e9-20cb-4ef2-b0d7-94896afe320d","Type":"ContainerDied","Data":"7745f3fa2e35da306017f85c75122bb521e65ad369375c67aabd0137ae84783e"} Dec 01 09:33:39 crc kubenswrapper[4689]: I1201 09:33:39.180419 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqnhc" event={"ID":"493e02e9-20cb-4ef2-b0d7-94896afe320d","Type":"ContainerStarted","Data":"1e73c93d06aedc714056c00f841ac8a92c727fa057f814f2f0c224626c726d0a"} Dec 01 09:33:40 crc kubenswrapper[4689]: I1201 09:33:40.196278 4689 generic.go:334] "Generic (PLEG): container finished" podID="493e02e9-20cb-4ef2-b0d7-94896afe320d" containerID="1e73c93d06aedc714056c00f841ac8a92c727fa057f814f2f0c224626c726d0a" exitCode=0 Dec 01 09:33:40 crc kubenswrapper[4689]: I1201 09:33:40.196389 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqnhc" event={"ID":"493e02e9-20cb-4ef2-b0d7-94896afe320d","Type":"ContainerDied","Data":"1e73c93d06aedc714056c00f841ac8a92c727fa057f814f2f0c224626c726d0a"} Dec 01 09:33:41 crc kubenswrapper[4689]: I1201 09:33:41.222882 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqnhc" event={"ID":"493e02e9-20cb-4ef2-b0d7-94896afe320d","Type":"ContainerStarted","Data":"d11bd773e5529509cbd48216b8a5f0340c4f91d765769743faac437a7229c6dc"} Dec 01 09:33:42 crc kubenswrapper[4689]: I1201 09:33:42.068705 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqnhc" podStartSLOduration=26.243490942 podStartE2EDuration="36.06867634s" podCreationTimestamp="2025-12-01 09:33:06 +0000 UTC" firstStartedPulling="2025-12-01 09:33:31.102500937 +0000 UTC m=+3291.174788841" lastFinishedPulling="2025-12-01 09:33:40.927686335 +0000 UTC m=+3300.999974239" observedRunningTime="2025-12-01 09:33:41.251144863 +0000 UTC m=+3301.323432797" watchObservedRunningTime="2025-12-01 09:33:42.06867634 +0000 UTC m=+3302.140964244" Dec 01 09:33:42 crc kubenswrapper[4689]: I1201 09:33:42.552177 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 01 09:33:46 crc kubenswrapper[4689]: I1201 09:33:46.290352 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"107c3226-2b1b-4f80-9670-8f0c1ffd3337","Type":"ContainerStarted","Data":"50890759293db1064685b6ffa708b5f9aebd3c81bc6f1095346d5013abfa6bdd"} Dec 01 09:33:46 crc kubenswrapper[4689]: I1201 09:33:46.318246 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=6.059724415 podStartE2EDuration="1m6.318228109s" podCreationTimestamp="2025-12-01 09:32:40 +0000 UTC" firstStartedPulling="2025-12-01 09:32:42.290302596 +0000 UTC m=+3242.362590500" lastFinishedPulling="2025-12-01 09:33:42.54880629 +0000 UTC m=+3302.621094194" observedRunningTime="2025-12-01 09:33:46.306300015 +0000 UTC m=+3306.378587919" watchObservedRunningTime="2025-12-01 09:33:46.318228109 +0000 UTC m=+3306.390516013" Dec 01 09:33:46 crc kubenswrapper[4689]: I1201 09:33:46.753599 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:46 crc kubenswrapper[4689]: I1201 09:33:46.753672 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqnhc" Dec 01 09:33:46 crc kubenswrapper[4689]: I1201 
Dec 01 09:33:46 crc kubenswrapper[4689]: I1201 09:33:46.809398 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqnhc"
Dec 01 09:33:47 crc kubenswrapper[4689]: I1201 09:33:47.348121 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqnhc"
Dec 01 09:33:47 crc kubenswrapper[4689]: I1201 09:33:47.439094 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqnhc"]
Dec 01 09:33:47 crc kubenswrapper[4689]: I1201 09:33:47.494985 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwm44"]
Dec 01 09:33:47 crc kubenswrapper[4689]: I1201 09:33:47.496177 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fwm44" podUID="665830e4-f511-4fa5-8892-75d5bc618ede" containerName="registry-server" containerID="cri-o://46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1" gracePeriod=2
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.033752 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwm44"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.202069 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-catalog-content\") pod \"665830e4-f511-4fa5-8892-75d5bc618ede\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") "
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.202478 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-utilities\") pod \"665830e4-f511-4fa5-8892-75d5bc618ede\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") "
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.202740 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/665830e4-f511-4fa5-8892-75d5bc618ede-kube-api-access-62dbk\") pod \"665830e4-f511-4fa5-8892-75d5bc618ede\" (UID: \"665830e4-f511-4fa5-8892-75d5bc618ede\") "
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.202946 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-utilities" (OuterVolumeSpecName: "utilities") pod "665830e4-f511-4fa5-8892-75d5bc618ede" (UID: "665830e4-f511-4fa5-8892-75d5bc618ede"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.203314 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.211891 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665830e4-f511-4fa5-8892-75d5bc618ede-kube-api-access-62dbk" (OuterVolumeSpecName: "kube-api-access-62dbk") pod "665830e4-f511-4fa5-8892-75d5bc618ede" (UID: "665830e4-f511-4fa5-8892-75d5bc618ede"). InnerVolumeSpecName "kube-api-access-62dbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.298359 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "665830e4-f511-4fa5-8892-75d5bc618ede" (UID: "665830e4-f511-4fa5-8892-75d5bc618ede"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.305253 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/665830e4-f511-4fa5-8892-75d5bc618ede-kube-api-access-62dbk\") on node \"crc\" DevicePath \"\""
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.305306 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/665830e4-f511-4fa5-8892-75d5bc618ede-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.311117 4689 generic.go:334] "Generic (PLEG): container finished" podID="665830e4-f511-4fa5-8892-75d5bc618ede" containerID="46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1" exitCode=0
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.312055 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwm44"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.315526 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwm44" event={"ID":"665830e4-f511-4fa5-8892-75d5bc618ede","Type":"ContainerDied","Data":"46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1"}
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.315623 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwm44" event={"ID":"665830e4-f511-4fa5-8892-75d5bc618ede","Type":"ContainerDied","Data":"a65e49ec654ad782658313b1670c9c985aa6ad3f814ab6a67e567a542fe0a641"}
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.315645 4689 scope.go:117] "RemoveContainer" containerID="46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.352771 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwm44"]
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.367323 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fwm44"]
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.397029 4689 scope.go:117] "RemoveContainer" containerID="a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.427954 4689 scope.go:117] "RemoveContainer" containerID="3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.814001 4689 scope.go:117] "RemoveContainer" containerID="46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1"
Dec 01 09:33:48 crc kubenswrapper[4689]: E1201 09:33:48.815146 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1\": container with ID starting with 46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1 not found: ID does not exist" containerID="46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.815210 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1"} err="failed to get container status \"46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1\": rpc error: code = NotFound desc = could not find container \"46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1\": container with ID starting with 46e6e610b7b5577742f57bba5b457128cab81fb37921c5d835f9531f553c86b1 not found: ID does not exist"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.815233 4689 scope.go:117] "RemoveContainer" containerID="a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f"
Dec 01 09:33:48 crc kubenswrapper[4689]: E1201 09:33:48.815550 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f\": container with ID starting with a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f not found: ID does not exist" containerID="a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.815583 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f"} err="failed to get container status \"a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f\": rpc error: code = NotFound desc = could not find container \"a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f\": container with ID starting with a8943d91f9c8f49d61d8b5a2c4c8a9d0327d57711f58c983a5e4c8db06ec1d1f not found: ID does not exist"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.815603 4689 scope.go:117] "RemoveContainer" containerID="3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11"
Dec 01 09:33:48 crc kubenswrapper[4689]: E1201 09:33:48.815998 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11\": container with ID starting with 3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11 not found: ID does not exist" containerID="3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11"
Dec 01 09:33:48 crc kubenswrapper[4689]: I1201 09:33:48.816021 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11"} err="failed to get container status \"3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11\": rpc error: code = NotFound desc = could not find container \"3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11\": container with ID starting with 3d563009ebaf57a3dcd9f9be083026b5f77c39693b0822802e64fa7c9618cf11 not found: ID does not exist"
Dec 01 09:33:49 crc kubenswrapper[4689]: I1201 09:33:49.059315 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="665830e4-f511-4fa5-8892-75d5bc618ede" path="/var/lib/kubelet/pods/665830e4-f511-4fa5-8892-75d5bc618ede/volumes"
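The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above look alarming but are benign: the kubelet re-checks each container ID with the runtime before deleting it, and a NotFound answer simply means an earlier pass already removed it. A hedged Go sketch of that idempotent-delete pattern (removeContainer and statusFn are stand-ins, not the kubelet's actual helpers):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer looks the container up first and treats a gRPC NotFound
// from the runtime as "already removed" rather than as a failure.
// statusFn stands in for the CRI ContainerStatus call.
func removeContainer(id string, statusFn func(string) error) error {
	if err := statusFn(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // container already gone: nothing left to delete
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	// ...stop and remove the container through the runtime here...
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeContainer("46e6e610b7b5", gone)) // prints <nil>
}
```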
Dec 01 09:34:09 crc kubenswrapper[4689]: I1201 09:34:09.146902 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:34:09 crc kubenswrapper[4689]: I1201 09:34:09.147695 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:34:39 crc kubenswrapper[4689]: I1201 09:34:39.147759 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:34:39 crc kubenswrapper[4689]: I1201 09:34:39.148247 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:35:09 crc kubenswrapper[4689]: I1201 09:35:09.147152 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 01 09:35:09 crc kubenswrapper[4689]: I1201 09:35:09.148932 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 01 09:35:09 crc kubenswrapper[4689]: I1201 09:35:09.149084 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx"
Dec 01 09:35:09 crc kubenswrapper[4689]: I1201 09:35:09.150087 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4faa3fc9d613af72eff28875b4605ed4e4b31f63bb6f62515a694b8dd41544ee"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 01 09:35:09 crc kubenswrapper[4689]: I1201 09:35:09.150277 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://4faa3fc9d613af72eff28875b4605ed4e4b31f63bb6f62515a694b8dd41544ee" gracePeriod=600
Dec 01 09:35:09 crc kubenswrapper[4689]: I1201 09:35:09.462202 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="4faa3fc9d613af72eff28875b4605ed4e4b31f63bb6f62515a694b8dd41544ee" exitCode=0
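The liveness failures above recur exactly 30 seconds apart (09:34:09, 09:34:39, 09:35:09), and the third one triggers "will be restarted" followed by a kill whose gracePeriod=600 comes from the pod's termination grace period. That cadence is consistent with an HTTP probe shaped roughly like the sketch below using the k8s.io/api types; the actual machine-config-daemon manifest may differ, and the period and threshold here are inferred from the event spacing:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	liveness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1", // the endpoint the prober dials in the log
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // matches the 09:34:09 / 09:34:39 / 09:35:09 spacing
		FailureThreshold: 3,  // third consecutive failure triggers the restart
	}
	fmt.Printf("%+v\n", liveness)
}
```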
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"4faa3fc9d613af72eff28875b4605ed4e4b31f63bb6f62515a694b8dd41544ee"} Dec 01 09:35:09 crc kubenswrapper[4689]: I1201 09:35:09.462628 4689 scope.go:117] "RemoveContainer" containerID="061213c21e96b72086584b4bc2e0384184a8626ff20d0fe98b0c83c0e192169e" Dec 01 09:35:10 crc kubenswrapper[4689]: I1201 09:35:10.473629 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf"} Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.463247 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w7289"] Dec 01 09:35:46 crc kubenswrapper[4689]: E1201 09:35:46.464430 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665830e4-f511-4fa5-8892-75d5bc618ede" containerName="registry-server" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.464449 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="665830e4-f511-4fa5-8892-75d5bc618ede" containerName="registry-server" Dec 01 09:35:46 crc kubenswrapper[4689]: E1201 09:35:46.464473 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665830e4-f511-4fa5-8892-75d5bc618ede" containerName="extract-utilities" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.464481 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="665830e4-f511-4fa5-8892-75d5bc618ede" containerName="extract-utilities" Dec 01 09:35:46 crc kubenswrapper[4689]: E1201 09:35:46.464488 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="extract-content" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.464496 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="extract-content" Dec 01 09:35:46 crc kubenswrapper[4689]: E1201 09:35:46.464519 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="extract-utilities" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.464529 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="extract-utilities" Dec 01 09:35:46 crc kubenswrapper[4689]: E1201 09:35:46.464567 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665830e4-f511-4fa5-8892-75d5bc618ede" containerName="extract-content" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.464575 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="665830e4-f511-4fa5-8892-75d5bc618ede" containerName="extract-content" Dec 01 09:35:46 crc kubenswrapper[4689]: E1201 09:35:46.464592 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="registry-server" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.464599 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="registry-server" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.464847 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="665830e4-f511-4fa5-8892-75d5bc618ede" containerName="registry-server" Dec 01 09:35:46 crc 
kubenswrapper[4689]: I1201 09:35:46.464869 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="454bf5f0-e32c-4f8a-b10a-21dbcdd69f70" containerName="registry-server" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.466728 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.476264 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w7289"] Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.617224 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-utilities\") pod \"redhat-operators-w7289\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") " pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.617500 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmm64\" (UniqueName: \"kubernetes.io/projected/a446fc55-b4f8-4600-91fe-609acfc85c1a-kube-api-access-gmm64\") pod \"redhat-operators-w7289\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") " pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.617798 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-catalog-content\") pod \"redhat-operators-w7289\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") " pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.719941 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-catalog-content\") pod \"redhat-operators-w7289\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") " pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.720098 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-utilities\") pod \"redhat-operators-w7289\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") " pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.720191 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmm64\" (UniqueName: \"kubernetes.io/projected/a446fc55-b4f8-4600-91fe-609acfc85c1a-kube-api-access-gmm64\") pod \"redhat-operators-w7289\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") " pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.720553 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-utilities\") pod \"redhat-operators-w7289\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") " pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.720851 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-catalog-content\") pod \"redhat-operators-w7289\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") " pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.749659 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmm64\" (UniqueName: \"kubernetes.io/projected/a446fc55-b4f8-4600-91fe-609acfc85c1a-kube-api-access-gmm64\") pod \"redhat-operators-w7289\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") " pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:46 crc kubenswrapper[4689]: I1201 09:35:46.807759 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w7289" Dec 01 09:35:47 crc kubenswrapper[4689]: I1201 09:35:47.521703 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w7289"] Dec 01 09:35:47 crc kubenswrapper[4689]: I1201 09:35:47.822171 4689 generic.go:334] "Generic (PLEG): container finished" podID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerID="158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3" exitCode=0 Dec 01 09:35:47 crc kubenswrapper[4689]: I1201 09:35:47.822215 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7289" event={"ID":"a446fc55-b4f8-4600-91fe-609acfc85c1a","Type":"ContainerDied","Data":"158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3"} Dec 01 09:35:47 crc kubenswrapper[4689]: I1201 09:35:47.822252 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7289" event={"ID":"a446fc55-b4f8-4600-91fe-609acfc85c1a","Type":"ContainerStarted","Data":"342301be5f741ff4f2a9dba29e652810d25371c24a52c070059f6ca58090d578"} Dec 01 09:35:49 crc kubenswrapper[4689]: I1201 09:35:49.846807 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7289" event={"ID":"a446fc55-b4f8-4600-91fe-609acfc85c1a","Type":"ContainerStarted","Data":"1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c"} Dec 01 09:35:52 crc kubenswrapper[4689]: I1201 09:35:52.937730 4689 generic.go:334] "Generic (PLEG): container finished" podID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerID="1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c" exitCode=0 Dec 01 09:35:52 crc kubenswrapper[4689]: I1201 09:35:52.937788 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7289" event={"ID":"a446fc55-b4f8-4600-91fe-609acfc85c1a","Type":"ContainerDied","Data":"1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c"} Dec 01 09:35:53 crc kubenswrapper[4689]: I1201 09:35:53.950702 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7289" event={"ID":"a446fc55-b4f8-4600-91fe-609acfc85c1a","Type":"ContainerStarted","Data":"45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4"} Dec 01 09:35:53 crc kubenswrapper[4689]: I1201 09:35:53.976556 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w7289" podStartSLOduration=2.307434785 podStartE2EDuration="7.976536946s" podCreationTimestamp="2025-12-01 09:35:46 +0000 UTC" firstStartedPulling="2025-12-01 09:35:47.824061815 +0000 UTC m=+3427.896349719" lastFinishedPulling="2025-12-01 09:35:53.493163946 +0000 UTC m=+3433.565451880" 
Dec 01 09:35:56 crc kubenswrapper[4689]: I1201 09:35:56.808220 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w7289"
Dec 01 09:35:56 crc kubenswrapper[4689]: I1201 09:35:56.808599 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w7289"
Dec 01 09:35:57 crc kubenswrapper[4689]: I1201 09:35:57.856996 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w7289" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerName="registry-server" probeResult="failure" output=<
Dec 01 09:35:57 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s
Dec 01 09:35:57 crc kubenswrapper[4689]: >
Dec 01 09:36:06 crc kubenswrapper[4689]: I1201 09:36:06.858608 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w7289"
Dec 01 09:36:06 crc kubenswrapper[4689]: I1201 09:36:06.917700 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w7289"
Dec 01 09:36:07 crc kubenswrapper[4689]: I1201 09:36:07.103237 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w7289"]
Dec 01 09:36:08 crc kubenswrapper[4689]: I1201 09:36:08.100270 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w7289" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerName="registry-server" containerID="cri-o://45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4" gracePeriod=2
Dec 01 09:36:08 crc kubenswrapper[4689]: I1201 09:36:08.709502 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w7289"
Dec 01 09:36:08 crc kubenswrapper[4689]: I1201 09:36:08.862343 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-utilities\") pod \"a446fc55-b4f8-4600-91fe-609acfc85c1a\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") "
Dec 01 09:36:08 crc kubenswrapper[4689]: I1201 09:36:08.862451 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmm64\" (UniqueName: \"kubernetes.io/projected/a446fc55-b4f8-4600-91fe-609acfc85c1a-kube-api-access-gmm64\") pod \"a446fc55-b4f8-4600-91fe-609acfc85c1a\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") "
Dec 01 09:36:08 crc kubenswrapper[4689]: I1201 09:36:08.862641 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-catalog-content\") pod \"a446fc55-b4f8-4600-91fe-609acfc85c1a\" (UID: \"a446fc55-b4f8-4600-91fe-609acfc85c1a\") "
Dec 01 09:36:08 crc kubenswrapper[4689]: I1201 09:36:08.863342 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-utilities" (OuterVolumeSpecName: "utilities") pod "a446fc55-b4f8-4600-91fe-609acfc85c1a" (UID: "a446fc55-b4f8-4600-91fe-609acfc85c1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:36:08 crc kubenswrapper[4689]: I1201 09:36:08.869132 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a446fc55-b4f8-4600-91fe-609acfc85c1a-kube-api-access-gmm64" (OuterVolumeSpecName: "kube-api-access-gmm64") pod "a446fc55-b4f8-4600-91fe-609acfc85c1a" (UID: "a446fc55-b4f8-4600-91fe-609acfc85c1a"). InnerVolumeSpecName "kube-api-access-gmm64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:36:08 crc kubenswrapper[4689]: I1201 09:36:08.965311 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:08 crc kubenswrapper[4689]: I1201 09:36:08.965674 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmm64\" (UniqueName: \"kubernetes.io/projected/a446fc55-b4f8-4600-91fe-609acfc85c1a-kube-api-access-gmm64\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.019262 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a446fc55-b4f8-4600-91fe-609acfc85c1a" (UID: "a446fc55-b4f8-4600-91fe-609acfc85c1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.067414 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a446fc55-b4f8-4600-91fe-609acfc85c1a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.116732 4689 generic.go:334] "Generic (PLEG): container finished" podID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerID="45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4" exitCode=0
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.116779 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7289" event={"ID":"a446fc55-b4f8-4600-91fe-609acfc85c1a","Type":"ContainerDied","Data":"45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4"}
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.116805 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w7289" event={"ID":"a446fc55-b4f8-4600-91fe-609acfc85c1a","Type":"ContainerDied","Data":"342301be5f741ff4f2a9dba29e652810d25371c24a52c070059f6ca58090d578"}
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.116824 4689 scope.go:117] "RemoveContainer" containerID="45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4"
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.117614 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w7289"
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.147755 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w7289"]
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.157304 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w7289"]
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.166055 4689 scope.go:117] "RemoveContainer" containerID="1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c"
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.193795 4689 scope.go:117] "RemoveContainer" containerID="158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3"
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.242600 4689 scope.go:117] "RemoveContainer" containerID="45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4"
Dec 01 09:36:09 crc kubenswrapper[4689]: E1201 09:36:09.243122 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4\": container with ID starting with 45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4 not found: ID does not exist" containerID="45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4"
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.243165 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4"} err="failed to get container status \"45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4\": rpc error: code = NotFound desc = could not find container \"45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4\": container with ID starting with 45a67ea39590f721b02c7c061f667bc1fd15f4dc19ad991bd249009dcee82db4 not found: ID does not exist"
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.243187 4689 scope.go:117] "RemoveContainer" containerID="1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c"
Dec 01 09:36:09 crc kubenswrapper[4689]: E1201 09:36:09.243767 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c\": container with ID starting with 1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c not found: ID does not exist" containerID="1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c"
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.243824 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c"} err="failed to get container status \"1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c\": rpc error: code = NotFound desc = could not find container \"1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c\": container with ID starting with 1b929b425c23b13f0746abbaa372baa0862504e3c1860224b8de6a7fa7d26b9c not found: ID does not exist"
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.243861 4689 scope.go:117] "RemoveContainer" containerID="158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3"
Dec 01 09:36:09 crc kubenswrapper[4689]: E1201 09:36:09.244339 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3\": container with ID starting with 158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3 not found: ID does not exist" containerID="158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3"
Dec 01 09:36:09 crc kubenswrapper[4689]: I1201 09:36:09.244401 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3"} err="failed to get container status \"158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3\": rpc error: code = NotFound desc = could not find container \"158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3\": container with ID starting with 158e7dc254d33e02acc61bd6d7f4743d049b831601571ad50a8f4d91b3130fe3 not found: ID does not exist"
Dec 01 09:36:11 crc kubenswrapper[4689]: I1201 09:36:11.058956 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" path="/var/lib/kubelet/pods/a446fc55-b4f8-4600-91fe-609acfc85c1a/volumes"
Dec 01 09:36:48 crc kubenswrapper[4689]: I1201 09:36:48.485349 4689 generic.go:334] "Generic (PLEG): container finished" podID="107c3226-2b1b-4f80-9670-8f0c1ffd3337" containerID="50890759293db1064685b6ffa708b5f9aebd3c81bc6f1095346d5013abfa6bdd" exitCode=0
Dec 01 09:36:48 crc kubenswrapper[4689]: I1201 09:36:48.485441 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"107c3226-2b1b-4f80-9670-8f0c1ffd3337","Type":"ContainerDied","Data":"50890759293db1064685b6ffa708b5f9aebd3c81bc6f1095346d5013abfa6bdd"}
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.909490 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985088 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-temporary\") pod \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") "
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985136 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-workdir\") pod \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") "
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985176 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ca-certs\") pod \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") "
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985231 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ssh-key\") pod \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") "
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985267 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-config-data\") pod \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") "
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985300 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") "
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985346 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config\") pod \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") "
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985450 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvkqx\" (UniqueName: \"kubernetes.io/projected/107c3226-2b1b-4f80-9670-8f0c1ffd3337-kube-api-access-kvkqx\") pod \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") "
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985536 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config-secret\") pod \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\" (UID: \"107c3226-2b1b-4f80-9670-8f0c1ffd3337\") "
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.985954 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "107c3226-2b1b-4f80-9670-8f0c1ffd3337" (UID: "107c3226-2b1b-4f80-9670-8f0c1ffd3337"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.986124 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-config-data" (OuterVolumeSpecName: "config-data") pod "107c3226-2b1b-4f80-9670-8f0c1ffd3337" (UID: "107c3226-2b1b-4f80-9670-8f0c1ffd3337"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:36:49 crc kubenswrapper[4689]: I1201 09:36:49.990669 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "107c3226-2b1b-4f80-9670-8f0c1ffd3337" (UID: "107c3226-2b1b-4f80-9670-8f0c1ffd3337"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:49.994849 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107c3226-2b1b-4f80-9670-8f0c1ffd3337-kube-api-access-kvkqx" (OuterVolumeSpecName: "kube-api-access-kvkqx") pod "107c3226-2b1b-4f80-9670-8f0c1ffd3337" (UID: "107c3226-2b1b-4f80-9670-8f0c1ffd3337"). InnerVolumeSpecName "kube-api-access-kvkqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.011888 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "107c3226-2b1b-4f80-9670-8f0c1ffd3337" (UID: "107c3226-2b1b-4f80-9670-8f0c1ffd3337"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.028657 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "107c3226-2b1b-4f80-9670-8f0c1ffd3337" (UID: "107c3226-2b1b-4f80-9670-8f0c1ffd3337"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.062580 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "107c3226-2b1b-4f80-9670-8f0c1ffd3337" (UID: "107c3226-2b1b-4f80-9670-8f0c1ffd3337"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.068546 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "107c3226-2b1b-4f80-9670-8f0c1ffd3337" (UID: "107c3226-2b1b-4f80-9670-8f0c1ffd3337"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.076501 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "107c3226-2b1b-4f80-9670-8f0c1ffd3337" (UID: "107c3226-2b1b-4f80-9670-8f0c1ffd3337"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.087431 4689 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.087466 4689 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/107c3226-2b1b-4f80-9670-8f0c1ffd3337-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.087476 4689 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ca-certs\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.087485 4689 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.087493 4689 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-config-data\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.087561 4689 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.087571 4689 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.087583 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvkqx\" (UniqueName: \"kubernetes.io/projected/107c3226-2b1b-4f80-9670-8f0c1ffd3337-kube-api-access-kvkqx\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.087591 4689 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/107c3226-2b1b-4f80-9670-8f0c1ffd3337-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.109922 4689 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.189773 4689 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.506779 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"107c3226-2b1b-4f80-9670-8f0c1ffd3337","Type":"ContainerDied","Data":"b0140a844a48251dfbbf5ac0af72c55535fbb30ee921a682fbf1d0e1e7385480"}
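"local-storage10-crc" is a local PersistentVolume: the UnmountDevice above releases its device-level mount when the tempest pod is torn down, and the very next pod (test-operator-logs-pod..., below) re-mounts the same PV at /mnt/openstack/pv10. Local PVs carry required node affinity, which is why both consumers necessarily run on node "crc". A hedged sketch of what such a PV object typically looks like; the cluster's actual object may differ:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage10-crc"},
		Spec: corev1.PersistentVolumeSpec{
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// Matches the "device mount path /mnt/openstack/pv10" logged below.
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv10"},
			},
			// Local volumes must pin to the node that owns the path.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"crc"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name, pv.Spec.Local.Path)
}
```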
pod="openstack/tempest-tests-tempest" event={"ID":"107c3226-2b1b-4f80-9670-8f0c1ffd3337","Type":"ContainerDied","Data":"b0140a844a48251dfbbf5ac0af72c55535fbb30ee921a682fbf1d0e1e7385480"} Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.506825 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0140a844a48251dfbbf5ac0af72c55535fbb30ee921a682fbf1d0e1e7385480" Dec 01 09:36:50 crc kubenswrapper[4689]: I1201 09:36:50.506854 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.192287 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 09:37:02 crc kubenswrapper[4689]: E1201 09:37:02.193412 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107c3226-2b1b-4f80-9670-8f0c1ffd3337" containerName="tempest-tests-tempest-tests-runner" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.193439 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="107c3226-2b1b-4f80-9670-8f0c1ffd3337" containerName="tempest-tests-tempest-tests-runner" Dec 01 09:37:02 crc kubenswrapper[4689]: E1201 09:37:02.193475 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerName="extract-utilities" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.193485 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerName="extract-utilities" Dec 01 09:37:02 crc kubenswrapper[4689]: E1201 09:37:02.193507 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerName="extract-content" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.193516 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerName="extract-content" Dec 01 09:37:02 crc kubenswrapper[4689]: E1201 09:37:02.193554 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerName="registry-server" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.193562 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerName="registry-server" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.193777 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="107c3226-2b1b-4f80-9670-8f0c1ffd3337" containerName="tempest-tests-tempest-tests-runner" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.193794 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="a446fc55-b4f8-4600-91fe-609acfc85c1a" containerName="registry-server" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.194611 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.203359 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ph4vn" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.221437 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.404758 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"308bdd90-c162-47a3-bc04-5369c9b235b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.404864 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcd5c\" (UniqueName: \"kubernetes.io/projected/308bdd90-c162-47a3-bc04-5369c9b235b8-kube-api-access-dcd5c\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"308bdd90-c162-47a3-bc04-5369c9b235b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.513749 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"308bdd90-c162-47a3-bc04-5369c9b235b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.513879 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcd5c\" (UniqueName: \"kubernetes.io/projected/308bdd90-c162-47a3-bc04-5369c9b235b8-kube-api-access-dcd5c\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"308bdd90-c162-47a3-bc04-5369c9b235b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.514638 4689 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"308bdd90-c162-47a3-bc04-5369c9b235b8\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.537579 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcd5c\" (UniqueName: \"kubernetes.io/projected/308bdd90-c162-47a3-bc04-5369c9b235b8-kube-api-access-dcd5c\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"308bdd90-c162-47a3-bc04-5369c9b235b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 09:37:02 crc kubenswrapper[4689]: I1201 09:37:02.548178 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"308bdd90-c162-47a3-bc04-5369c9b235b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 09:37:02 crc 
kubenswrapper[4689]: I1201 09:37:02.585626 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 01 09:37:03 crc kubenswrapper[4689]: I1201 09:37:03.094699 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 01 09:37:03 crc kubenswrapper[4689]: I1201 09:37:03.105571 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 09:37:03 crc kubenswrapper[4689]: I1201 09:37:03.658704 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"308bdd90-c162-47a3-bc04-5369c9b235b8","Type":"ContainerStarted","Data":"dfd5ede5ef0076a281bad47a5f27bbb63721656f2642e9c19e1776d146797b29"} Dec 01 09:37:04 crc kubenswrapper[4689]: I1201 09:37:04.668630 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"308bdd90-c162-47a3-bc04-5369c9b235b8","Type":"ContainerStarted","Data":"f2a8f9262e30ab75ade98a77aa4966effb7077f9f6ef42463761c4b925fcfcfd"} Dec 01 09:37:04 crc kubenswrapper[4689]: I1201 09:37:04.690118 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.8119729470000001 podStartE2EDuration="2.690093667s" podCreationTimestamp="2025-12-01 09:37:02 +0000 UTC" firstStartedPulling="2025-12-01 09:37:03.105164542 +0000 UTC m=+3503.177452456" lastFinishedPulling="2025-12-01 09:37:03.983285272 +0000 UTC m=+3504.055573176" observedRunningTime="2025-12-01 09:37:04.682806478 +0000 UTC m=+3504.755094402" watchObservedRunningTime="2025-12-01 09:37:04.690093667 +0000 UTC m=+3504.762381571" Dec 01 09:37:09 crc kubenswrapper[4689]: I1201 09:37:09.146670 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:37:09 crc kubenswrapper[4689]: I1201 09:37:09.147107 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.032416 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-f6hnn/must-gather-sv7dg"] Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.051723 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.058242 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-f6hnn"/"openshift-service-ca.crt" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.058558 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-f6hnn"/"default-dockercfg-6z8t4" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.058766 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-f6hnn"/"kube-root-ca.crt" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.059793 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrrt\" (UniqueName: \"kubernetes.io/projected/d61f8f56-c6aa-469c-8ffc-178814fe85e5-kube-api-access-qsrrt\") pod \"must-gather-sv7dg\" (UID: \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\") " pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.060063 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d61f8f56-c6aa-469c-8ffc-178814fe85e5-must-gather-output\") pod \"must-gather-sv7dg\" (UID: \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\") " pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.108634 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-f6hnn/must-gather-sv7dg"] Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.162150 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d61f8f56-c6aa-469c-8ffc-178814fe85e5-must-gather-output\") pod \"must-gather-sv7dg\" (UID: \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\") " pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.162229 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsrrt\" (UniqueName: \"kubernetes.io/projected/d61f8f56-c6aa-469c-8ffc-178814fe85e5-kube-api-access-qsrrt\") pod \"must-gather-sv7dg\" (UID: \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\") " pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.162655 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d61f8f56-c6aa-469c-8ffc-178814fe85e5-must-gather-output\") pod \"must-gather-sv7dg\" (UID: \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\") " pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.188594 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsrrt\" (UniqueName: \"kubernetes.io/projected/d61f8f56-c6aa-469c-8ffc-178814fe85e5-kube-api-access-qsrrt\") pod \"must-gather-sv7dg\" (UID: \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\") " pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.400912 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.881161 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-f6hnn/must-gather-sv7dg"] Dec 01 09:37:28 crc kubenswrapper[4689]: I1201 09:37:28.960262 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" event={"ID":"d61f8f56-c6aa-469c-8ffc-178814fe85e5","Type":"ContainerStarted","Data":"f08359fb048f71fbfcee91d19a4f523b6f35d3b7f3c908c711b58017fd8c41a1"} Dec 01 09:37:35 crc kubenswrapper[4689]: I1201 09:37:35.043611 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" event={"ID":"d61f8f56-c6aa-469c-8ffc-178814fe85e5","Type":"ContainerStarted","Data":"fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc"} Dec 01 09:37:36 crc kubenswrapper[4689]: I1201 09:37:36.061292 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" event={"ID":"d61f8f56-c6aa-469c-8ffc-178814fe85e5","Type":"ContainerStarted","Data":"498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb"} Dec 01 09:37:36 crc kubenswrapper[4689]: I1201 09:37:36.081667 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" podStartSLOduration=2.261802091 podStartE2EDuration="8.081648683s" podCreationTimestamp="2025-12-01 09:37:28 +0000 UTC" firstStartedPulling="2025-12-01 09:37:28.887587756 +0000 UTC m=+3528.959875660" lastFinishedPulling="2025-12-01 09:37:34.707434348 +0000 UTC m=+3534.779722252" observedRunningTime="2025-12-01 09:37:36.07422213 +0000 UTC m=+3536.146510034" watchObservedRunningTime="2025-12-01 09:37:36.081648683 +0000 UTC m=+3536.153936587" Dec 01 09:37:39 crc kubenswrapper[4689]: E1201 09:37:39.021313 4689 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.190:54502->38.102.83.190:35327: write tcp 38.102.83.190:54502->38.102.83.190:35327: write: broken pipe Dec 01 09:37:39 crc kubenswrapper[4689]: I1201 09:37:39.148174 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:37:39 crc kubenswrapper[4689]: I1201 09:37:39.148228 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:37:40 crc kubenswrapper[4689]: I1201 09:37:40.062759 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-f6hnn/crc-debug-stzgp"] Dec 01 09:37:40 crc kubenswrapper[4689]: I1201 09:37:40.065569 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:37:40 crc kubenswrapper[4689]: I1201 09:37:40.148760 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvjz\" (UniqueName: \"kubernetes.io/projected/a0a2a564-e37b-44c7-a71c-d921e27f79f3-kube-api-access-fhvjz\") pod \"crc-debug-stzgp\" (UID: \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\") " pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:37:40 crc kubenswrapper[4689]: I1201 09:37:40.148865 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0a2a564-e37b-44c7-a71c-d921e27f79f3-host\") pod \"crc-debug-stzgp\" (UID: \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\") " pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:37:40 crc kubenswrapper[4689]: I1201 09:37:40.251310 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvjz\" (UniqueName: \"kubernetes.io/projected/a0a2a564-e37b-44c7-a71c-d921e27f79f3-kube-api-access-fhvjz\") pod \"crc-debug-stzgp\" (UID: \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\") " pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:37:40 crc kubenswrapper[4689]: I1201 09:37:40.251466 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0a2a564-e37b-44c7-a71c-d921e27f79f3-host\") pod \"crc-debug-stzgp\" (UID: \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\") " pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:37:40 crc kubenswrapper[4689]: I1201 09:37:40.251599 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0a2a564-e37b-44c7-a71c-d921e27f79f3-host\") pod \"crc-debug-stzgp\" (UID: \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\") " pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:37:40 crc kubenswrapper[4689]: I1201 09:37:40.281166 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvjz\" (UniqueName: \"kubernetes.io/projected/a0a2a564-e37b-44c7-a71c-d921e27f79f3-kube-api-access-fhvjz\") pod \"crc-debug-stzgp\" (UID: \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\") " pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:37:40 crc kubenswrapper[4689]: I1201 09:37:40.403674 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:37:41 crc kubenswrapper[4689]: I1201 09:37:41.109479 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/crc-debug-stzgp" event={"ID":"a0a2a564-e37b-44c7-a71c-d921e27f79f3","Type":"ContainerStarted","Data":"9b9c779977385c142313ec428e0bb35bce324d8e2af3c9cd38e03149c3b9f7bf"} Dec 01 09:37:54 crc kubenswrapper[4689]: I1201 09:37:54.233975 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/crc-debug-stzgp" event={"ID":"a0a2a564-e37b-44c7-a71c-d921e27f79f3","Type":"ContainerStarted","Data":"b01ceed10892271b43789bf7ad7b96a4edb6d8dfba8866f5401eda94e8c8d239"} Dec 01 09:37:54 crc kubenswrapper[4689]: I1201 09:37:54.260242 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-f6hnn/crc-debug-stzgp" podStartSLOduration=1.465688235 podStartE2EDuration="14.260220958s" podCreationTimestamp="2025-12-01 09:37:40 +0000 UTC" firstStartedPulling="2025-12-01 09:37:40.484618617 +0000 UTC m=+3540.556906521" lastFinishedPulling="2025-12-01 09:37:53.27915134 +0000 UTC m=+3553.351439244" observedRunningTime="2025-12-01 09:37:54.248778696 +0000 UTC m=+3554.321066600" watchObservedRunningTime="2025-12-01 09:37:54.260220958 +0000 UTC m=+3554.332508852" Dec 01 09:38:09 crc kubenswrapper[4689]: I1201 09:38:09.147009 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:38:09 crc kubenswrapper[4689]: I1201 09:38:09.148556 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:38:09 crc kubenswrapper[4689]: I1201 09:38:09.148699 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 09:38:09 crc kubenswrapper[4689]: I1201 09:38:09.149524 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:38:09 crc kubenswrapper[4689]: I1201 09:38:09.149664 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" gracePeriod=600 Dec 01 09:38:09 crc kubenswrapper[4689]: E1201 09:38:09.281350 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:38:09 crc kubenswrapper[4689]: I1201 09:38:09.371049 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" exitCode=0 Dec 01 09:38:09 crc kubenswrapper[4689]: I1201 09:38:09.371092 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf"} Dec 01 09:38:09 crc kubenswrapper[4689]: I1201 09:38:09.371129 4689 scope.go:117] "RemoveContainer" containerID="4faa3fc9d613af72eff28875b4605ed4e4b31f63bb6f62515a694b8dd41544ee" Dec 01 09:38:09 crc kubenswrapper[4689]: I1201 09:38:09.371948 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:38:09 crc kubenswrapper[4689]: E1201 09:38:09.372285 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:38:21 crc kubenswrapper[4689]: I1201 09:38:21.059380 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:38:21 crc kubenswrapper[4689]: E1201 09:38:21.063545 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:38:33 crc kubenswrapper[4689]: I1201 09:38:33.047726 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:38:33 crc kubenswrapper[4689]: E1201 09:38:33.048697 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:38:38 crc kubenswrapper[4689]: I1201 09:38:38.647304 4689 generic.go:334] "Generic (PLEG): container finished" podID="a0a2a564-e37b-44c7-a71c-d921e27f79f3" containerID="b01ceed10892271b43789bf7ad7b96a4edb6d8dfba8866f5401eda94e8c8d239" exitCode=0 Dec 01 09:38:38 crc kubenswrapper[4689]: I1201 09:38:38.647416 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/crc-debug-stzgp" event={"ID":"a0a2a564-e37b-44c7-a71c-d921e27f79f3","Type":"ContainerDied","Data":"b01ceed10892271b43789bf7ad7b96a4edb6d8dfba8866f5401eda94e8c8d239"} Dec 01 09:38:39 crc kubenswrapper[4689]: I1201 09:38:39.758535 4689 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:38:39 crc kubenswrapper[4689]: I1201 09:38:39.800692 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-f6hnn/crc-debug-stzgp"] Dec 01 09:38:39 crc kubenswrapper[4689]: I1201 09:38:39.811808 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-f6hnn/crc-debug-stzgp"] Dec 01 09:38:39 crc kubenswrapper[4689]: I1201 09:38:39.822618 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0a2a564-e37b-44c7-a71c-d921e27f79f3-host\") pod \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\" (UID: \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\") " Dec 01 09:38:39 crc kubenswrapper[4689]: I1201 09:38:39.822795 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhvjz\" (UniqueName: \"kubernetes.io/projected/a0a2a564-e37b-44c7-a71c-d921e27f79f3-kube-api-access-fhvjz\") pod \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\" (UID: \"a0a2a564-e37b-44c7-a71c-d921e27f79f3\") " Dec 01 09:38:39 crc kubenswrapper[4689]: I1201 09:38:39.824242 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0a2a564-e37b-44c7-a71c-d921e27f79f3-host" (OuterVolumeSpecName: "host") pod "a0a2a564-e37b-44c7-a71c-d921e27f79f3" (UID: "a0a2a564-e37b-44c7-a71c-d921e27f79f3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:38:39 crc kubenswrapper[4689]: I1201 09:38:39.840566 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a2a564-e37b-44c7-a71c-d921e27f79f3-kube-api-access-fhvjz" (OuterVolumeSpecName: "kube-api-access-fhvjz") pod "a0a2a564-e37b-44c7-a71c-d921e27f79f3" (UID: "a0a2a564-e37b-44c7-a71c-d921e27f79f3"). InnerVolumeSpecName "kube-api-access-fhvjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:38:39 crc kubenswrapper[4689]: I1201 09:38:39.925207 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhvjz\" (UniqueName: \"kubernetes.io/projected/a0a2a564-e37b-44c7-a71c-d921e27f79f3-kube-api-access-fhvjz\") on node \"crc\" DevicePath \"\"" Dec 01 09:38:39 crc kubenswrapper[4689]: I1201 09:38:39.925257 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0a2a564-e37b-44c7-a71c-d921e27f79f3-host\") on node \"crc\" DevicePath \"\"" Dec 01 09:38:40 crc kubenswrapper[4689]: I1201 09:38:40.669026 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b9c779977385c142313ec428e0bb35bce324d8e2af3c9cd38e03149c3b9f7bf" Dec 01 09:38:40 crc kubenswrapper[4689]: I1201 09:38:40.669255 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-stzgp" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.043748 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-f6hnn/crc-debug-gmrkg"] Dec 01 09:38:41 crc kubenswrapper[4689]: E1201 09:38:41.044156 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a2a564-e37b-44c7-a71c-d921e27f79f3" containerName="container-00" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.044169 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a2a564-e37b-44c7-a71c-d921e27f79f3" containerName="container-00" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.044464 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a2a564-e37b-44c7-a71c-d921e27f79f3" containerName="container-00" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.045048 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.064986 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a2a564-e37b-44c7-a71c-d921e27f79f3" path="/var/lib/kubelet/pods/a0a2a564-e37b-44c7-a71c-d921e27f79f3/volumes" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.153444 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-host\") pod \"crc-debug-gmrkg\" (UID: \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\") " pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.153620 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxmj4\" (UniqueName: \"kubernetes.io/projected/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-kube-api-access-sxmj4\") pod \"crc-debug-gmrkg\" (UID: \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\") " pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.255128 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxmj4\" (UniqueName: \"kubernetes.io/projected/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-kube-api-access-sxmj4\") pod \"crc-debug-gmrkg\" (UID: \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\") " pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.255302 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-host\") pod \"crc-debug-gmrkg\" (UID: \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\") " pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.255516 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-host\") pod \"crc-debug-gmrkg\" (UID: \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\") " pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.282076 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxmj4\" (UniqueName: \"kubernetes.io/projected/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-kube-api-access-sxmj4\") pod \"crc-debug-gmrkg\" (UID: \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\") " 
pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.368483 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.680472 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" event={"ID":"6e0e5f87-87ef-4101-bf7c-e749a9c8f294","Type":"ContainerStarted","Data":"97c5bc996678773fa4e45141ca8e7060b7caf9f695280de2968565fe1864b1fe"} Dec 01 09:38:41 crc kubenswrapper[4689]: I1201 09:38:41.680518 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" event={"ID":"6e0e5f87-87ef-4101-bf7c-e749a9c8f294","Type":"ContainerStarted","Data":"1a89fe3404710f0af5ccab24ccda4a5b9ff323cf54256cfc42b41c3042c4b22d"} Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.104568 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-f6hnn/crc-debug-gmrkg"] Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.117019 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-f6hnn/crc-debug-gmrkg"] Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.703451 4689 generic.go:334] "Generic (PLEG): container finished" podID="6e0e5f87-87ef-4101-bf7c-e749a9c8f294" containerID="97c5bc996678773fa4e45141ca8e7060b7caf9f695280de2968565fe1864b1fe" exitCode=0 Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.810175 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.885852 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxmj4\" (UniqueName: \"kubernetes.io/projected/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-kube-api-access-sxmj4\") pod \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\" (UID: \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\") " Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.886066 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-host\") pod \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\" (UID: \"6e0e5f87-87ef-4101-bf7c-e749a9c8f294\") " Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.886307 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-host" (OuterVolumeSpecName: "host") pod "6e0e5f87-87ef-4101-bf7c-e749a9c8f294" (UID: "6e0e5f87-87ef-4101-bf7c-e749a9c8f294"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.886871 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-host\") on node \"crc\" DevicePath \"\"" Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.897209 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-kube-api-access-sxmj4" (OuterVolumeSpecName: "kube-api-access-sxmj4") pod "6e0e5f87-87ef-4101-bf7c-e749a9c8f294" (UID: "6e0e5f87-87ef-4101-bf7c-e749a9c8f294"). InnerVolumeSpecName "kube-api-access-sxmj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:38:42 crc kubenswrapper[4689]: I1201 09:38:42.988489 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxmj4\" (UniqueName: \"kubernetes.io/projected/6e0e5f87-87ef-4101-bf7c-e749a9c8f294-kube-api-access-sxmj4\") on node \"crc\" DevicePath \"\"" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.058607 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0e5f87-87ef-4101-bf7c-e749a9c8f294" path="/var/lib/kubelet/pods/6e0e5f87-87ef-4101-bf7c-e749a9c8f294/volumes" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.287527 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-f6hnn/crc-debug-gbzsq"] Dec 01 09:38:43 crc kubenswrapper[4689]: E1201 09:38:43.288400 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0e5f87-87ef-4101-bf7c-e749a9c8f294" containerName="container-00" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.288417 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0e5f87-87ef-4101-bf7c-e749a9c8f294" containerName="container-00" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.288699 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0e5f87-87ef-4101-bf7c-e749a9c8f294" containerName="container-00" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.289530 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.395222 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpcw5\" (UniqueName: \"kubernetes.io/projected/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-kube-api-access-cpcw5\") pod \"crc-debug-gbzsq\" (UID: \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\") " pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.395340 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-host\") pod \"crc-debug-gbzsq\" (UID: \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\") " pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.496655 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-host\") pod \"crc-debug-gbzsq\" (UID: \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\") " pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.496784 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpcw5\" (UniqueName: \"kubernetes.io/projected/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-kube-api-access-cpcw5\") pod \"crc-debug-gbzsq\" (UID: \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\") " pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.496847 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-host\") pod \"crc-debug-gbzsq\" (UID: \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\") " pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.514343 4689 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cpcw5\" (UniqueName: \"kubernetes.io/projected/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-kube-api-access-cpcw5\") pod \"crc-debug-gbzsq\" (UID: \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\") " pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.607431 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:43 crc kubenswrapper[4689]: W1201 09:38:43.636535 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42f54a15_e062_4951_9d0f_ba2abbf1e4e5.slice/crio-6fd3372d8b7cd620000c19fb812d1ae1ab3674f6277083473a9be7c1b9514435 WatchSource:0}: Error finding container 6fd3372d8b7cd620000c19fb812d1ae1ab3674f6277083473a9be7c1b9514435: Status 404 returned error can't find the container with id 6fd3372d8b7cd620000c19fb812d1ae1ab3674f6277083473a9be7c1b9514435 Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.712773 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" event={"ID":"42f54a15-e062-4951-9d0f-ba2abbf1e4e5","Type":"ContainerStarted","Data":"6fd3372d8b7cd620000c19fb812d1ae1ab3674f6277083473a9be7c1b9514435"} Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.718346 4689 scope.go:117] "RemoveContainer" containerID="97c5bc996678773fa4e45141ca8e7060b7caf9f695280de2968565fe1864b1fe" Dec 01 09:38:43 crc kubenswrapper[4689]: I1201 09:38:43.718400 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-gmrkg" Dec 01 09:38:44 crc kubenswrapper[4689]: I1201 09:38:44.729470 4689 generic.go:334] "Generic (PLEG): container finished" podID="42f54a15-e062-4951-9d0f-ba2abbf1e4e5" containerID="adfda9aa67f6861323d528bf0b12a01d9fd0c0dbd86ceb9dc4e36416936f394f" exitCode=0 Dec 01 09:38:44 crc kubenswrapper[4689]: I1201 09:38:44.729513 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" event={"ID":"42f54a15-e062-4951-9d0f-ba2abbf1e4e5","Type":"ContainerDied","Data":"adfda9aa67f6861323d528bf0b12a01d9fd0c0dbd86ceb9dc4e36416936f394f"} Dec 01 09:38:44 crc kubenswrapper[4689]: I1201 09:38:44.772783 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-f6hnn/crc-debug-gbzsq"] Dec 01 09:38:44 crc kubenswrapper[4689]: I1201 09:38:44.783115 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-f6hnn/crc-debug-gbzsq"] Dec 01 09:38:45 crc kubenswrapper[4689]: I1201 09:38:45.052993 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:38:45 crc kubenswrapper[4689]: E1201 09:38:45.053863 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:38:45 crc kubenswrapper[4689]: I1201 09:38:45.849055 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:45 crc kubenswrapper[4689]: I1201 09:38:45.943417 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-host\") pod \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\" (UID: \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\") " Dec 01 09:38:45 crc kubenswrapper[4689]: I1201 09:38:45.943542 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-host" (OuterVolumeSpecName: "host") pod "42f54a15-e062-4951-9d0f-ba2abbf1e4e5" (UID: "42f54a15-e062-4951-9d0f-ba2abbf1e4e5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:38:45 crc kubenswrapper[4689]: I1201 09:38:45.943606 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpcw5\" (UniqueName: \"kubernetes.io/projected/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-kube-api-access-cpcw5\") pod \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\" (UID: \"42f54a15-e062-4951-9d0f-ba2abbf1e4e5\") " Dec 01 09:38:45 crc kubenswrapper[4689]: I1201 09:38:45.944065 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-host\") on node \"crc\" DevicePath \"\"" Dec 01 09:38:45 crc kubenswrapper[4689]: I1201 09:38:45.957688 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-kube-api-access-cpcw5" (OuterVolumeSpecName: "kube-api-access-cpcw5") pod "42f54a15-e062-4951-9d0f-ba2abbf1e4e5" (UID: "42f54a15-e062-4951-9d0f-ba2abbf1e4e5"). InnerVolumeSpecName "kube-api-access-cpcw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:38:46 crc kubenswrapper[4689]: I1201 09:38:46.045380 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpcw5\" (UniqueName: \"kubernetes.io/projected/42f54a15-e062-4951-9d0f-ba2abbf1e4e5-kube-api-access-cpcw5\") on node \"crc\" DevicePath \"\"" Dec 01 09:38:46 crc kubenswrapper[4689]: I1201 09:38:46.748992 4689 scope.go:117] "RemoveContainer" containerID="adfda9aa67f6861323d528bf0b12a01d9fd0c0dbd86ceb9dc4e36416936f394f" Dec 01 09:38:46 crc kubenswrapper[4689]: I1201 09:38:46.749057 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-f6hnn/crc-debug-gbzsq" Dec 01 09:38:47 crc kubenswrapper[4689]: I1201 09:38:47.057210 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f54a15-e062-4951-9d0f-ba2abbf1e4e5" path="/var/lib/kubelet/pods/42f54a15-e062-4951-9d0f-ba2abbf1e4e5/volumes" Dec 01 09:38:58 crc kubenswrapper[4689]: I1201 09:38:58.048701 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:38:58 crc kubenswrapper[4689]: E1201 09:38:58.049942 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:39:02 crc kubenswrapper[4689]: I1201 09:39:02.368340 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7bd884c498-fvqdz_1bd94e50-aa23-4249-acd5-293b272a8123/barbican-api/0.log" Dec 01 09:39:02 crc kubenswrapper[4689]: I1201 09:39:02.498629 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7bd884c498-fvqdz_1bd94e50-aa23-4249-acd5-293b272a8123/barbican-api-log/0.log" Dec 01 09:39:02 crc kubenswrapper[4689]: I1201 09:39:02.783578 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cdd6b5dcb-j5dgx_215d6908-3cbd-486b-adc3-82cdaddef118/barbican-keystone-listener/0.log" Dec 01 09:39:02 crc kubenswrapper[4689]: I1201 09:39:02.867889 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cdd6b5dcb-j5dgx_215d6908-3cbd-486b-adc3-82cdaddef118/barbican-keystone-listener-log/0.log" Dec 01 09:39:02 crc kubenswrapper[4689]: I1201 09:39:02.935322 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-844c6c5cff-mqnnk_059abe7a-8a94-4c9a-8ac2-1830fffad22c/barbican-worker/0.log" Dec 01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.039216 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-844c6c5cff-mqnnk_059abe7a-8a94-4c9a-8ac2-1830fffad22c/barbican-worker-log/0.log" Dec 01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.136492 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4_caf4bdec-471c-4c07-a5a7-294faf35c880/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.336445 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5971de46-c278-4f0d-80be-0a7a25d7678c/ceilometer-notification-agent/0.log" Dec 01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.354257 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5971de46-c278-4f0d-80be-0a7a25d7678c/ceilometer-central-agent/0.log" Dec 01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.412525 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5971de46-c278-4f0d-80be-0a7a25d7678c/proxy-httpd/0.log" Dec 01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.498073 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5971de46-c278-4f0d-80be-0a7a25d7678c/sg-core/0.log" Dec 
01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.641259 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8f0f718c-3a19-482a-9ed0-4c4d7dbac886/cinder-api/0.log" Dec 01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.746238 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8f0f718c-3a19-482a-9ed0-4c4d7dbac886/cinder-api-log/0.log" Dec 01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.926172 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0556c1c8-69cc-4fa6-a3df-46a4ed439312/cinder-scheduler/1.log" Dec 01 09:39:03 crc kubenswrapper[4689]: I1201 09:39:03.987858 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0556c1c8-69cc-4fa6-a3df-46a4ed439312/cinder-scheduler/0.log" Dec 01 09:39:04 crc kubenswrapper[4689]: I1201 09:39:04.070451 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0556c1c8-69cc-4fa6-a3df-46a4ed439312/probe/0.log" Dec 01 09:39:04 crc kubenswrapper[4689]: I1201 09:39:04.231644 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vf78w_14713a8f-36bf-48fa-bfb2-3c384ad7abd0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:04 crc kubenswrapper[4689]: I1201 09:39:04.289689 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fndhp_4ae39c64-0beb-4b8c-a08b-35aba6ecb704/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:04 crc kubenswrapper[4689]: I1201 09:39:04.532059 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-dkgsn_fb61d912-665c-4e59-b0cf-7e46e24e5201/init/0.log" Dec 01 09:39:04 crc kubenswrapper[4689]: I1201 09:39:04.766078 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-dkgsn_fb61d912-665c-4e59-b0cf-7e46e24e5201/dnsmasq-dns/0.log" Dec 01 09:39:04 crc kubenswrapper[4689]: I1201 09:39:04.812677 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-dkgsn_fb61d912-665c-4e59-b0cf-7e46e24e5201/init/0.log" Dec 01 09:39:04 crc kubenswrapper[4689]: I1201 09:39:04.913596 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh_7f3287e5-9e76-46ee-91c4-8bc9b69a738f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:05 crc kubenswrapper[4689]: I1201 09:39:05.056230 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_097455e0-57c7-4c8e-bd14-86890aecc860/glance-log/0.log" Dec 01 09:39:05 crc kubenswrapper[4689]: I1201 09:39:05.088567 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_097455e0-57c7-4c8e-bd14-86890aecc860/glance-httpd/0.log" Dec 01 09:39:05 crc kubenswrapper[4689]: I1201 09:39:05.391573 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5dd0e22c-0f46-4089-9ef0-7882c6068697/glance-httpd/0.log" Dec 01 09:39:05 crc kubenswrapper[4689]: I1201 09:39:05.466443 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5dd0e22c-0f46-4089-9ef0-7882c6068697/glance-log/0.log" Dec 01 09:39:05 crc kubenswrapper[4689]: I1201 09:39:05.517509 4689 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_horizon-d65b9788-2kr5p_fcebf70c-3de0-499e-928d-3419299a512f/horizon/1.log" Dec 01 09:39:05 crc kubenswrapper[4689]: I1201 09:39:05.878405 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d65b9788-2kr5p_fcebf70c-3de0-499e-928d-3419299a512f/horizon/0.log" Dec 01 09:39:06 crc kubenswrapper[4689]: I1201 09:39:06.135075 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4_c8be40c2-4b56-46d0-b99b-0fd198004a03/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:06 crc kubenswrapper[4689]: I1201 09:39:06.352548 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d65b9788-2kr5p_fcebf70c-3de0-499e-928d-3419299a512f/horizon-log/0.log" Dec 01 09:39:06 crc kubenswrapper[4689]: I1201 09:39:06.453991 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-x69xg_7b1625a4-a976-4cd2-8e93-7022d1571f1f/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:06 crc kubenswrapper[4689]: I1201 09:39:06.839979 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29409661-b4dz4_06af101b-855c-409b-8f88-171d7e9aaffc/keystone-cron/0.log" Dec 01 09:39:06 crc kubenswrapper[4689]: I1201 09:39:06.932052 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7575f55b68-75xn5_3c402617-8f98-4531-b798-f395844db3ea/keystone-api/0.log" Dec 01 09:39:06 crc kubenswrapper[4689]: I1201 09:39:06.955969 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_432574e7-df30-4103-a396-c758c4df932c/kube-state-metrics/0.log" Dec 01 09:39:06 crc kubenswrapper[4689]: I1201 09:39:06.963852 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_432574e7-df30-4103-a396-c758c4df932c/kube-state-metrics/1.log" Dec 01 09:39:07 crc kubenswrapper[4689]: I1201 09:39:07.187809 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8_79ac411d-051b-464c-ab78-a5e99ef18520/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:07 crc kubenswrapper[4689]: I1201 09:39:07.645241 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58c7f9c74f-nqnzt_9834ce74-a0c7-4e32-9d8b-1d39b27c62b6/neutron-httpd/0.log" Dec 01 09:39:07 crc kubenswrapper[4689]: I1201 09:39:07.646674 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58c7f9c74f-nqnzt_9834ce74-a0c7-4e32-9d8b-1d39b27c62b6/neutron-api/0.log" Dec 01 09:39:07 crc kubenswrapper[4689]: I1201 09:39:07.655001 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz_ccee02a7-c83e-4eb5-a6e7-f2ad619d948a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:07 crc kubenswrapper[4689]: I1201 09:39:07.893696 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a3a578c7-bcdf-46f5-a781-5759e3c6da45/nova-api-api/0.log" Dec 01 09:39:08 crc kubenswrapper[4689]: I1201 09:39:08.196608 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a3a578c7-bcdf-46f5-a781-5759e3c6da45/nova-api-log/0.log" Dec 01 09:39:08 crc kubenswrapper[4689]: I1201 09:39:08.259725 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_a3a578c7-bcdf-46f5-a781-5759e3c6da45/nova-api-log/1.log" Dec 01 09:39:08 crc kubenswrapper[4689]: I1201 09:39:08.315312 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a3a578c7-bcdf-46f5-a781-5759e3c6da45/nova-api-api/1.log" Dec 01 09:39:08 crc kubenswrapper[4689]: I1201 09:39:08.603751 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d111a251-6be1-4996-a20d-a6ecdb0dbec9/nova-cell0-conductor-conductor/0.log" Dec 01 09:39:08 crc kubenswrapper[4689]: I1201 09:39:08.635590 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4a7f30c7-ee71-44e3-9aed-b1e65916e8b7/nova-cell1-conductor-conductor/0.log" Dec 01 09:39:09 crc kubenswrapper[4689]: I1201 09:39:09.101996 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d1e959a4-6ab1-4c6c-86e4-8e319fc8806a/nova-cell1-novncproxy-novncproxy/0.log" Dec 01 09:39:09 crc kubenswrapper[4689]: I1201 09:39:09.196932 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-mkgfg_5351042e-776c-44c1-a6ad-bf530a24bfb7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:09 crc kubenswrapper[4689]: I1201 09:39:09.421069 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0e9419c-e23b-4c71-b88e-736138bcdd65/nova-metadata-log/0.log" Dec 01 09:39:09 crc kubenswrapper[4689]: I1201 09:39:09.473501 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0e9419c-e23b-4c71-b88e-736138bcdd65/nova-metadata-metadata/0.log" Dec 01 09:39:09 crc kubenswrapper[4689]: I1201 09:39:09.688494 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0e9419c-e23b-4c71-b88e-736138bcdd65/nova-metadata-log/1.log" Dec 01 09:39:09 crc kubenswrapper[4689]: I1201 09:39:09.994099 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bc1ecd4c-eede-492c-ac97-071c42545607/mysql-bootstrap/0.log" Dec 01 09:39:10 crc kubenswrapper[4689]: I1201 09:39:10.005931 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc/nova-scheduler-scheduler/0.log" Dec 01 09:39:10 crc kubenswrapper[4689]: I1201 09:39:10.327356 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bc1ecd4c-eede-492c-ac97-071c42545607/mysql-bootstrap/0.log" Dec 01 09:39:10 crc kubenswrapper[4689]: I1201 09:39:10.371945 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bc1ecd4c-eede-492c-ac97-071c42545607/galera/0.log" Dec 01 09:39:10 crc kubenswrapper[4689]: I1201 09:39:10.601824 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_555543d8-21bb-4dba-9c08-ab82e90ea894/mysql-bootstrap/0.log" Dec 01 09:39:10 crc kubenswrapper[4689]: I1201 09:39:10.805556 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0e9419c-e23b-4c71-b88e-736138bcdd65/nova-metadata-metadata/1.log" Dec 01 09:39:10 crc kubenswrapper[4689]: I1201 09:39:10.851908 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_555543d8-21bb-4dba-9c08-ab82e90ea894/mysql-bootstrap/0.log" Dec 01 09:39:10 crc kubenswrapper[4689]: I1201 09:39:10.875969 4689 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-galera-0_555543d8-21bb-4dba-9c08-ab82e90ea894/galera/0.log" Dec 01 09:39:10 crc kubenswrapper[4689]: I1201 09:39:10.999554 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_555543d8-21bb-4dba-9c08-ab82e90ea894/galera/1.log" Dec 01 09:39:11 crc kubenswrapper[4689]: I1201 09:39:11.064724 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:39:11 crc kubenswrapper[4689]: E1201 09:39:11.064926 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:39:11 crc kubenswrapper[4689]: I1201 09:39:11.177206 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285/openstackclient/0.log" Dec 01 09:39:11 crc kubenswrapper[4689]: I1201 09:39:11.349775 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-48955_8731b0fb-0429-4730-8da9-cc182fdf29e1/ovn-controller/0.log" Dec 01 09:39:11 crc kubenswrapper[4689]: I1201 09:39:11.535014 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t4rfs_5b0566d9-e730-4929-aa69-fba41a7c88c0/openstack-network-exporter/0.log" Dec 01 09:39:11 crc kubenswrapper[4689]: I1201 09:39:11.698976 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sj4xx_a0d0f0ef-1203-4001-9872-7c32022a4839/ovsdb-server-init/0.log" Dec 01 09:39:12 crc kubenswrapper[4689]: I1201 09:39:12.020142 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sj4xx_a0d0f0ef-1203-4001-9872-7c32022a4839/ovs-vswitchd/0.log" Dec 01 09:39:12 crc kubenswrapper[4689]: I1201 09:39:12.114038 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sj4xx_a0d0f0ef-1203-4001-9872-7c32022a4839/ovsdb-server/0.log" Dec 01 09:39:12 crc kubenswrapper[4689]: I1201 09:39:12.146422 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sj4xx_a0d0f0ef-1203-4001-9872-7c32022a4839/ovsdb-server-init/0.log" Dec 01 09:39:12 crc kubenswrapper[4689]: I1201 09:39:12.384244 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_aa6871a1-f6d5-44b1-a4b7-638763c9c92b/openstack-network-exporter/0.log" Dec 01 09:39:12 crc kubenswrapper[4689]: I1201 09:39:12.386125 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9d8rc_8df0b21e-33ac-48fa-b46f-558a7e4c37fc/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:12 crc kubenswrapper[4689]: I1201 09:39:12.469249 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_aa6871a1-f6d5-44b1-a4b7-638763c9c92b/ovn-northd/0.log" Dec 01 09:39:12 crc kubenswrapper[4689]: I1201 09:39:12.727679 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_150dfc79-4971-4c3d-aada-13fc85bd101c/ovsdbserver-nb/0.log" Dec 01 09:39:12 crc kubenswrapper[4689]: I1201 09:39:12.739566 4689 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_150dfc79-4971-4c3d-aada-13fc85bd101c/openstack-network-exporter/0.log" Dec 01 09:39:12 crc kubenswrapper[4689]: I1201 09:39:12.950493 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5b1a856a-afb7-4839-a797-7625521520b2/openstack-network-exporter/0.log" Dec 01 09:39:13 crc kubenswrapper[4689]: I1201 09:39:13.072869 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5b1a856a-afb7-4839-a797-7625521520b2/ovsdbserver-sb/0.log" Dec 01 09:39:13 crc kubenswrapper[4689]: I1201 09:39:13.219418 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5454b5d64d-5p8d8_a31e6c25-e2a2-4c12-9138-6969155a7f20/placement-api/0.log" Dec 01 09:39:13 crc kubenswrapper[4689]: I1201 09:39:13.315925 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5454b5d64d-5p8d8_a31e6c25-e2a2-4c12-9138-6969155a7f20/placement-log/0.log" Dec 01 09:39:13 crc kubenswrapper[4689]: I1201 09:39:13.443741 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5100fd48-e762-41b7-ac48-29b85c21dd3d/setup-container/0.log" Dec 01 09:39:13 crc kubenswrapper[4689]: I1201 09:39:13.736437 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5100fd48-e762-41b7-ac48-29b85c21dd3d/rabbitmq/0.log" Dec 01 09:39:13 crc kubenswrapper[4689]: I1201 09:39:13.745780 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b5ea820-9372-4a98-8000-75815f156435/setup-container/0.log" Dec 01 09:39:13 crc kubenswrapper[4689]: I1201 09:39:13.790968 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5100fd48-e762-41b7-ac48-29b85c21dd3d/setup-container/0.log" Dec 01 09:39:14 crc kubenswrapper[4689]: I1201 09:39:14.088932 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b5ea820-9372-4a98-8000-75815f156435/setup-container/0.log" Dec 01 09:39:14 crc kubenswrapper[4689]: I1201 09:39:14.231640 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b5ea820-9372-4a98-8000-75815f156435/rabbitmq/0.log" Dec 01 09:39:14 crc kubenswrapper[4689]: I1201 09:39:14.270779 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5_defe39e2-091c-472e-aefe-7691672100e7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:14 crc kubenswrapper[4689]: I1201 09:39:14.545805 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gcfdn_1da07875-e46b-4de1-8eea-fb33b293b5a7/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:14 crc kubenswrapper[4689]: I1201 09:39:14.651779 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-drctr_0e13f608-85ec-4fe4-b6bb-e651d2f736d3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:14 crc kubenswrapper[4689]: I1201 09:39:14.895544 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9vvhw_3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:15 crc kubenswrapper[4689]: I1201 09:39:15.014307 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-q6z88_8fd75600-1f4f-4bfb-94fd-d9778efd0e5e/ssh-known-hosts-edpm-deployment/0.log" Dec 01 09:39:15 crc kubenswrapper[4689]: I1201 09:39:15.303023 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7459744dff-cxqv7_e242b763-d0db-401f-b552-d109d6c5ec28/proxy-server/0.log" Dec 01 09:39:15 crc kubenswrapper[4689]: I1201 09:39:15.340858 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7459744dff-cxqv7_e242b763-d0db-401f-b552-d109d6c5ec28/proxy-httpd/0.log" Dec 01 09:39:15 crc kubenswrapper[4689]: I1201 09:39:15.422582 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-66t7q_0e57c646-4b20-4bb9-9c89-bad52b7a1c07/swift-ring-rebalance/0.log" Dec 01 09:39:15 crc kubenswrapper[4689]: I1201 09:39:15.663635 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/account-auditor/0.log" Dec 01 09:39:15 crc kubenswrapper[4689]: I1201 09:39:15.737608 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/account-reaper/0.log" Dec 01 09:39:15 crc kubenswrapper[4689]: I1201 09:39:15.788218 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/account-replicator/0.log" Dec 01 09:39:15 crc kubenswrapper[4689]: I1201 09:39:15.964535 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/account-server/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.143229 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/container-auditor/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.170233 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/container-server/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.270334 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/container-replicator/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.301169 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/container-updater/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.452703 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-expirer/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.527213 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-auditor/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.609143 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-replicator/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.709844 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-server/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.819559 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-updater/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.845718 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/rsync/0.log" Dec 01 09:39:16 crc kubenswrapper[4689]: I1201 09:39:16.911470 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/swift-recon-cron/0.log" Dec 01 09:39:17 crc kubenswrapper[4689]: I1201 09:39:17.209975 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-g6njz_4a88b941-7390-4f78-83e5-733fe9d39482/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:17 crc kubenswrapper[4689]: I1201 09:39:17.240781 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_107c3226-2b1b-4f80-9670-8f0c1ffd3337/tempest-tests-tempest-tests-runner/0.log" Dec 01 09:39:17 crc kubenswrapper[4689]: I1201 09:39:17.470258 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_308bdd90-c162-47a3-bc04-5369c9b235b8/test-operator-logs-container/0.log" Dec 01 09:39:17 crc kubenswrapper[4689]: I1201 09:39:17.595591 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-42d6w_dc01b01d-6ad2-4595-ab0f-42cc127d1a7a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:39:22 crc kubenswrapper[4689]: I1201 09:39:22.046866 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:39:22 crc kubenswrapper[4689]: E1201 09:39:22.047713 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:39:27 crc kubenswrapper[4689]: I1201 09:39:27.468155 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f04989a7-e9bc-4d0b-a7a1-efe12657bd2b/memcached/0.log" Dec 01 09:39:37 crc kubenswrapper[4689]: I1201 09:39:37.048764 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:39:37 crc kubenswrapper[4689]: E1201 09:39:37.051398 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:39:49 crc kubenswrapper[4689]: I1201 09:39:49.047591 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:39:49 crc kubenswrapper[4689]: E1201 09:39:49.048269 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:39:49 crc kubenswrapper[4689]: I1201 09:39:49.314486 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7vlqn_7ce2f328-3ee3-4800-89e4-9141c841c258/manager/1.log" Dec 01 09:39:49 crc kubenswrapper[4689]: I1201 09:39:49.341182 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7vlqn_7ce2f328-3ee3-4800-89e4-9141c841c258/kube-rbac-proxy/0.log" Dec 01 09:39:49 crc kubenswrapper[4689]: I1201 09:39:49.510565 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7vlqn_7ce2f328-3ee3-4800-89e4-9141c841c258/manager/0.log" Dec 01 09:39:49 crc kubenswrapper[4689]: I1201 09:39:49.597275 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7vrt5_5266d333-3337-4481-9478-2e1df848bfa2/manager/1.log" Dec 01 09:39:49 crc kubenswrapper[4689]: I1201 09:39:49.598422 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7vrt5_5266d333-3337-4481-9478-2e1df848bfa2/kube-rbac-proxy/0.log" Dec 01 09:39:49 crc kubenswrapper[4689]: I1201 09:39:49.815739 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7vrt5_5266d333-3337-4481-9478-2e1df848bfa2/manager/0.log" Dec 01 09:39:49 crc kubenswrapper[4689]: I1201 09:39:49.851462 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-25q6j_2b35aff9-c66d-448c-9883-05e650f7f147/kube-rbac-proxy/0.log" Dec 01 09:39:49 crc kubenswrapper[4689]: I1201 09:39:49.886230 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-25q6j_2b35aff9-c66d-448c-9883-05e650f7f147/manager/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.095614 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/util/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.283766 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/pull/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.288348 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/util/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.328121 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/pull/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.493747 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/util/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.544041 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/pull/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.621611 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/extract/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.722960 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-xhrp7_ae47d16a-5025-44f4-8fa4-f5aa08b126b8/kube-rbac-proxy/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.783590 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-xhrp7_ae47d16a-5025-44f4-8fa4-f5aa08b126b8/manager/1.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.864005 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-xhrp7_ae47d16a-5025-44f4-8fa4-f5aa08b126b8/manager/0.log" Dec 01 09:39:50 crc kubenswrapper[4689]: I1201 09:39:50.990578 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-w6qx2_fc02885a-340a-4800-bd0b-360c0476b456/kube-rbac-proxy/0.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.098897 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-w6qx2_fc02885a-340a-4800-bd0b-360c0476b456/manager/1.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.206605 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-w6qx2_fc02885a-340a-4800-bd0b-360c0476b456/manager/0.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.281209 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-dp8gl_ffc5e400-7853-4b1d-ae11-d6ffa553093a/kube-rbac-proxy/0.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.351921 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-dp8gl_ffc5e400-7853-4b1d-ae11-d6ffa553093a/manager/1.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.507915 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tgmx9_e44ef73a-e172-4557-920d-42f84488390e/kube-rbac-proxy/0.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.551021 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-dp8gl_ffc5e400-7853-4b1d-ae11-d6ffa553093a/manager/0.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.683151 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tgmx9_e44ef73a-e172-4557-920d-42f84488390e/manager/1.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.817224 4689 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-f7xtr_ea3e4b08-090d-444e-ba53-a3df490fbaf8/kube-rbac-proxy/0.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.854570 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tgmx9_e44ef73a-e172-4557-920d-42f84488390e/manager/0.log" Dec 01 09:39:51 crc kubenswrapper[4689]: I1201 09:39:51.964538 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-f7xtr_ea3e4b08-090d-444e-ba53-a3df490fbaf8/manager/1.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.002709 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-f7xtr_ea3e4b08-090d-444e-ba53-a3df490fbaf8/manager/0.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.087182 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-758d67db86-z298n_2974e300-3f26-4ec0-912a-9ee6b78f33ce/kube-rbac-proxy/0.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.184446 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-758d67db86-z298n_2974e300-3f26-4ec0-912a-9ee6b78f33ce/manager/1.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.297489 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-758d67db86-z298n_2974e300-3f26-4ec0-912a-9ee6b78f33ce/manager/0.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.301072 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-x722t_3751be2a-8675-4b07-8198-101bfdd71d72/kube-rbac-proxy/0.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.437262 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-x722t_3751be2a-8675-4b07-8198-101bfdd71d72/manager/1.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.558070 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-x722t_3751be2a-8675-4b07-8198-101bfdd71d72/manager/0.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.590969 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-fm9bv_0d311ded-de3a-42e8-87d3-23c50c4fbd8a/kube-rbac-proxy/0.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.702829 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-fm9bv_0d311ded-de3a-42e8-87d3-23c50c4fbd8a/manager/1.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.767516 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-fm9bv_0d311ded-de3a-42e8-87d3-23c50c4fbd8a/manager/0.log" Dec 01 09:39:52 crc kubenswrapper[4689]: I1201 09:39:52.806982 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ghq5b_4d923f8c-103b-4b12-b2e7-ea926440e5e7/kube-rbac-proxy/0.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.012783 4689 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ghq5b_4d923f8c-103b-4b12-b2e7-ea926440e5e7/manager/1.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.054630 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ghq5b_4d923f8c-103b-4b12-b2e7-ea926440e5e7/manager/0.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.078291 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-pssbg_d4a1d78c-9486-4b3b-afac-2d51d2cb14df/kube-rbac-proxy/0.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.283058 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-pssbg_d4a1d78c-9486-4b3b-afac-2d51d2cb14df/manager/1.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.309876 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-vfnzm_12885cbd-1d3e-40c1-b7f5-73bdb6572db9/kube-rbac-proxy/0.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.333111 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-pssbg_d4a1d78c-9486-4b3b-afac-2d51d2cb14df/manager/0.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.555363 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-vfnzm_12885cbd-1d3e-40c1-b7f5-73bdb6572db9/manager/0.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.573004 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-vfnzm_12885cbd-1d3e-40c1-b7f5-73bdb6572db9/manager/1.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.665865 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9_6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538/kube-rbac-proxy/0.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.811453 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9_6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538/manager/0.log" Dec 01 09:39:53 crc kubenswrapper[4689]: I1201 09:39:53.827131 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9_6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538/manager/1.log" Dec 01 09:39:54 crc kubenswrapper[4689]: I1201 09:39:54.438053 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-c6fb994fd-5lzsb_161f3daa-6403-48b2-8e33-b01d632a2316/operator/0.log" Dec 01 09:39:54 crc kubenswrapper[4689]: I1201 09:39:54.471833 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-jss7v_b5026a2c-ab73-4b77-99d4-79dd6bcdb139/registry-server/0.log" Dec 01 09:39:54 crc kubenswrapper[4689]: I1201 09:39:54.473963 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6fc767d767-8r9dw_4f43cf3a-d166-44ba-8d44-9e81b0666e0a/manager/1.log" Dec 01 09:39:54 crc kubenswrapper[4689]: I1201 09:39:54.771272 4689 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-p296h_b3049390-311d-46ed-b472-d32a22f2f8d2/manager/1.log" Dec 01 09:39:54 crc kubenswrapper[4689]: I1201 09:39:54.793052 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-p296h_b3049390-311d-46ed-b472-d32a22f2f8d2/kube-rbac-proxy/0.log" Dec 01 09:39:54 crc kubenswrapper[4689]: I1201 09:39:54.978605 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-p296h_b3049390-311d-46ed-b472-d32a22f2f8d2/manager/0.log" Dec 01 09:39:55 crc kubenswrapper[4689]: I1201 09:39:55.057215 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6fc767d767-8r9dw_4f43cf3a-d166-44ba-8d44-9e81b0666e0a/manager/0.log" Dec 01 09:39:55 crc kubenswrapper[4689]: I1201 09:39:55.061824 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-nsnm9_3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a/manager/1.log" Dec 01 09:39:55 crc kubenswrapper[4689]: I1201 09:39:55.174227 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-nsnm9_3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a/kube-rbac-proxy/0.log" Dec 01 09:39:55 crc kubenswrapper[4689]: I1201 09:39:55.282098 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-nsnm9_3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a/manager/0.log" Dec 01 09:39:55 crc kubenswrapper[4689]: I1201 09:39:55.361846 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-t56mz_7085b604-e50c-4940-ac21-b6fe208c82cd/operator/1.log" Dec 01 09:39:55 crc kubenswrapper[4689]: I1201 09:39:55.428695 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-t56mz_7085b604-e50c-4940-ac21-b6fe208c82cd/operator/0.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.013948 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-5d8x5_8b33263b-a51c-49e4-b301-b975791e098a/kube-rbac-proxy/0.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.017360 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-prvxn_af92d0ca-8211-49a0-9362-bd5749143fff/kube-rbac-proxy/0.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.020991 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-5d8x5_8b33263b-a51c-49e4-b301-b975791e098a/manager/1.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.037382 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-prvxn_af92d0ca-8211-49a0-9362-bd5749143fff/manager/1.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.042948 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-5d8x5_8b33263b-a51c-49e4-b301-b975791e098a/manager/0.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.260331 4689 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vbkrn_f94d79da-740a-4080-81d0-ff3bf1867b3d/kube-rbac-proxy/0.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.278669 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vbkrn_f94d79da-740a-4080-81d0-ff3bf1867b3d/manager/0.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.340555 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vbkrn_f94d79da-740a-4080-81d0-ff3bf1867b3d/manager/1.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.406653 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-prvxn_af92d0ca-8211-49a0-9362-bd5749143fff/manager/0.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.491185 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-sfplx_5f9861d6-2700-4af6-b385-e79220c14b2e/kube-rbac-proxy/0.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.594765 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-sfplx_5f9861d6-2700-4af6-b385-e79220c14b2e/manager/0.log" Dec 01 09:39:56 crc kubenswrapper[4689]: I1201 09:39:56.601315 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-sfplx_5f9861d6-2700-4af6-b385-e79220c14b2e/manager/1.log" Dec 01 09:40:04 crc kubenswrapper[4689]: I1201 09:40:04.047855 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:40:04 crc kubenswrapper[4689]: E1201 09:40:04.048648 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:40:16 crc kubenswrapper[4689]: I1201 09:40:16.810695 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xjxwg_2499ecbd-1cda-49a9-8c8a-e80d44127f01/control-plane-machine-set-operator/0.log" Dec 01 09:40:17 crc kubenswrapper[4689]: I1201 09:40:17.022589 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-chlnk_c062b92b-1709-4892-9b40-b1d2405d5812/kube-rbac-proxy/0.log" Dec 01 09:40:17 crc kubenswrapper[4689]: I1201 09:40:17.035164 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-chlnk_c062b92b-1709-4892-9b40-b1d2405d5812/machine-api-operator/0.log" Dec 01 09:40:19 crc kubenswrapper[4689]: I1201 09:40:19.047359 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:40:19 crc kubenswrapper[4689]: E1201 09:40:19.047961 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:40:29 crc kubenswrapper[4689]: I1201 09:40:29.405086 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-jxq2j_159eaec1-709b-4f6b-9c2d-271433805055/cert-manager-controller/1.log" Dec 01 09:40:29 crc kubenswrapper[4689]: I1201 09:40:29.518971 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-jxq2j_159eaec1-709b-4f6b-9c2d-271433805055/cert-manager-controller/0.log" Dec 01 09:40:29 crc kubenswrapper[4689]: I1201 09:40:29.592933 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-lhzz2_f166eac0-2073-4aa8-9b0b-6b3c6e43b19e/cert-manager-cainjector/1.log" Dec 01 09:40:29 crc kubenswrapper[4689]: I1201 09:40:29.680834 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-lhzz2_f166eac0-2073-4aa8-9b0b-6b3c6e43b19e/cert-manager-cainjector/0.log" Dec 01 09:40:29 crc kubenswrapper[4689]: I1201 09:40:29.852609 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-mnqrt_0690c213-4822-49c3-a886-9dd92aa3f957/cert-manager-webhook/0.log" Dec 01 09:40:34 crc kubenswrapper[4689]: I1201 09:40:34.047732 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:40:34 crc kubenswrapper[4689]: E1201 09:40:34.048604 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:40:43 crc kubenswrapper[4689]: I1201 09:40:43.430480 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-rls4h_888875d4-358f-4232-96f5-7fe326118284/nmstate-console-plugin/0.log" Dec 01 09:40:43 crc kubenswrapper[4689]: I1201 09:40:43.735956 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mtr66_569aea60-ecf2-4ccb-b516-93098c33139a/nmstate-handler/0.log" Dec 01 09:40:43 crc kubenswrapper[4689]: I1201 09:40:43.784507 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-nwsvj_c0686309-db1b-42c8-963c-e66bee2b8bb1/kube-rbac-proxy/0.log" Dec 01 09:40:43 crc kubenswrapper[4689]: I1201 09:40:43.894278 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-nwsvj_c0686309-db1b-42c8-963c-e66bee2b8bb1/nmstate-metrics/0.log" Dec 01 09:40:44 crc kubenswrapper[4689]: I1201 09:40:44.056382 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-ldxm6_f38467c3-1d62-49ae-97f5-1fa17dbb514e/nmstate-operator/0.log" Dec 01 09:40:44 crc kubenswrapper[4689]: I1201 09:40:44.158931 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-fbrdp_4eb87e27-d5ce-4aa6-9808-862d7afb9fd1/nmstate-webhook/0.log" Dec 01 09:40:46 crc kubenswrapper[4689]: I1201 09:40:46.048983 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:40:46 crc kubenswrapper[4689]: E1201 09:40:46.049958 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:40:59 crc kubenswrapper[4689]: I1201 09:40:59.047298 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:40:59 crc kubenswrapper[4689]: E1201 09:40:59.048165 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:41:00 crc kubenswrapper[4689]: I1201 09:41:00.381723 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wrs47_ef20aee9-f534-4832-9bb0-ef4ec0c3c807/kube-rbac-proxy/0.log" Dec 01 09:41:00 crc kubenswrapper[4689]: I1201 09:41:00.544032 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wrs47_ef20aee9-f534-4832-9bb0-ef4ec0c3c807/controller/0.log" Dec 01 09:41:00 crc kubenswrapper[4689]: I1201 09:41:00.675477 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-frr-files/0.log" Dec 01 09:41:00 crc kubenswrapper[4689]: I1201 09:41:00.848853 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-reloader/0.log" Dec 01 09:41:00 crc kubenswrapper[4689]: I1201 09:41:00.892078 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-reloader/0.log" Dec 01 09:41:00 crc kubenswrapper[4689]: I1201 09:41:00.905060 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-frr-files/0.log" Dec 01 09:41:00 crc kubenswrapper[4689]: I1201 09:41:00.940309 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-metrics/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.287311 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-metrics/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.289457 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-frr-files/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.297444 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-metrics/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.328790 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-reloader/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.490095 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-frr-files/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.490252 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-reloader/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.525896 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-metrics/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.621061 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/controller/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.737648 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/frr-metrics/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.765907 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/kube-rbac-proxy/0.log" Dec 01 09:41:01 crc kubenswrapper[4689]: I1201 09:41:01.880249 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/kube-rbac-proxy-frr/0.log" Dec 01 09:41:02 crc kubenswrapper[4689]: I1201 09:41:02.042950 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/reloader/0.log" Dec 01 09:41:02 crc kubenswrapper[4689]: I1201 09:41:02.165667 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-nmlzc_b3b8a95d-6924-4416-a625-995ed59e230d/frr-k8s-webhook-server/0.log" Dec 01 09:41:02 crc kubenswrapper[4689]: I1201 09:41:02.478354 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6599c4498-sh7sl_7d09395b-ad54-4b96-af05-ea6ce866de71/manager/1.log" Dec 01 09:41:02 crc kubenswrapper[4689]: I1201 09:41:02.481880 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6599c4498-sh7sl_7d09395b-ad54-4b96-af05-ea6ce866de71/manager/0.log" Dec 01 09:41:02 crc kubenswrapper[4689]: I1201 09:41:02.899844 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fd7fdd679-r8jpf_1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1/webhook-server/0.log" Dec 01 09:41:02 crc kubenswrapper[4689]: I1201 09:41:02.917596 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/frr/0.log" Dec 01 09:41:03 crc kubenswrapper[4689]: I1201 09:41:03.015918 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5c56f_4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1/kube-rbac-proxy/0.log" Dec 01 09:41:03 crc kubenswrapper[4689]: I1201 09:41:03.527980 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-5c56f_4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1/speaker/1.log" Dec 01 09:41:03 crc kubenswrapper[4689]: I1201 09:41:03.624223 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5c56f_4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1/speaker/0.log" Dec 01 09:41:11 crc kubenswrapper[4689]: I1201 09:41:11.080620 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:41:11 crc kubenswrapper[4689]: E1201 09:41:11.081720 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:41:16 crc kubenswrapper[4689]: I1201 09:41:16.890545 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/util/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.185486 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/util/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.230491 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/pull/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.235276 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/pull/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.377504 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/util/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.416211 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/pull/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.443838 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/extract/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.578523 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/util/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.799486 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/util/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.821131 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/pull/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.840254 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/pull/0.log" Dec 01 09:41:17 crc kubenswrapper[4689]: I1201 09:41:17.995993 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/util/0.log" Dec 01 09:41:18 crc kubenswrapper[4689]: I1201 09:41:18.061556 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/pull/0.log" Dec 01 09:41:18 crc kubenswrapper[4689]: I1201 09:41:18.091249 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/extract/0.log" Dec 01 09:41:18 crc kubenswrapper[4689]: I1201 09:41:18.229897 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-utilities/0.log" Dec 01 09:41:18 crc kubenswrapper[4689]: I1201 09:41:18.494892 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-content/0.log" Dec 01 09:41:18 crc kubenswrapper[4689]: I1201 09:41:18.526467 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-content/0.log" Dec 01 09:41:18 crc kubenswrapper[4689]: I1201 09:41:18.545151 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-utilities/0.log" Dec 01 09:41:18 crc kubenswrapper[4689]: I1201 09:41:18.784688 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-content/0.log" Dec 01 09:41:18 crc kubenswrapper[4689]: I1201 09:41:18.794772 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-utilities/0.log" Dec 01 09:41:19 crc kubenswrapper[4689]: I1201 09:41:19.127542 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-utilities/0.log" Dec 01 09:41:19 crc kubenswrapper[4689]: I1201 09:41:19.263515 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/registry-server/0.log" Dec 01 09:41:19 crc kubenswrapper[4689]: I1201 09:41:19.407781 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-content/0.log" Dec 01 09:41:19 crc kubenswrapper[4689]: I1201 09:41:19.442804 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-utilities/0.log" Dec 01 09:41:19 crc kubenswrapper[4689]: I1201 09:41:19.499619 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-content/0.log" Dec 01 09:41:19 crc kubenswrapper[4689]: I1201 09:41:19.642684 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-utilities/0.log" Dec 01 09:41:19 crc kubenswrapper[4689]: I1201 09:41:19.656317 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-content/0.log" Dec 01 09:41:19 crc kubenswrapper[4689]: I1201 09:41:19.866597 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/registry-server/0.log" Dec 01 09:41:19 crc kubenswrapper[4689]: I1201 09:41:19.952437 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jhh4c_0cd9ccf0-2f85-4649-ac80-931f337566ca/marketplace-operator/0.log" Dec 01 09:41:20 crc kubenswrapper[4689]: I1201 09:41:20.071960 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jhh4c_0cd9ccf0-2f85-4649-ac80-931f337566ca/marketplace-operator/1.log" Dec 01 09:41:20 crc kubenswrapper[4689]: I1201 09:41:20.154918 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-utilities/0.log" Dec 01 09:41:20 crc kubenswrapper[4689]: I1201 09:41:20.343799 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-utilities/0.log" Dec 01 09:41:20 crc kubenswrapper[4689]: I1201 09:41:20.365796 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-content/0.log" Dec 01 09:41:20 crc kubenswrapper[4689]: I1201 09:41:20.405461 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-content/0.log" Dec 01 09:41:20 crc kubenswrapper[4689]: I1201 09:41:20.632241 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-content/0.log" Dec 01 09:41:20 crc kubenswrapper[4689]: I1201 09:41:20.656934 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-utilities/0.log" Dec 01 09:41:20 crc kubenswrapper[4689]: I1201 09:41:20.779884 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/registry-server/0.log" Dec 01 09:41:20 crc kubenswrapper[4689]: I1201 09:41:20.837603 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-utilities/0.log" Dec 01 09:41:21 crc kubenswrapper[4689]: I1201 09:41:21.122720 4689 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-content/0.log" Dec 01 09:41:21 crc kubenswrapper[4689]: I1201 09:41:21.129568 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-content/0.log" Dec 01 09:41:21 crc kubenswrapper[4689]: I1201 09:41:21.181057 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-utilities/0.log" Dec 01 09:41:21 crc kubenswrapper[4689]: I1201 09:41:21.363250 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-utilities/0.log" Dec 01 09:41:21 crc kubenswrapper[4689]: I1201 09:41:21.375987 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-content/0.log" Dec 01 09:41:21 crc kubenswrapper[4689]: I1201 09:41:21.857469 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/registry-server/0.log" Dec 01 09:41:22 crc kubenswrapper[4689]: I1201 09:41:22.048006 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:41:22 crc kubenswrapper[4689]: E1201 09:41:22.048352 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:41:35 crc kubenswrapper[4689]: I1201 09:41:35.049034 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:41:35 crc kubenswrapper[4689]: E1201 09:41:35.049991 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:41:47 crc kubenswrapper[4689]: I1201 09:41:47.048032 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:41:47 crc kubenswrapper[4689]: E1201 09:41:47.048851 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:41:58 crc kubenswrapper[4689]: I1201 09:41:58.048283 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:41:58 crc kubenswrapper[4689]: E1201 09:41:58.050189 
Dec 01 09:42:12 crc kubenswrapper[4689]: I1201 09:42:12.048502 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf"
Dec 01 09:42:12 crc kubenswrapper[4689]: E1201 09:42:12.049251 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:42:25 crc kubenswrapper[4689]: I1201 09:42:25.048243 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf"
Dec 01 09:42:25 crc kubenswrapper[4689]: E1201 09:42:25.049805 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:42:40 crc kubenswrapper[4689]: I1201 09:42:40.049160 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf"
Dec 01 09:42:40 crc kubenswrapper[4689]: E1201 09:42:40.050019 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.315440 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d52hb"]
Dec 01 09:42:41 crc kubenswrapper[4689]: E1201 09:42:41.316316 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f54a15-e062-4951-9d0f-ba2abbf1e4e5" containerName="container-00"
Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.316333 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f54a15-e062-4951-9d0f-ba2abbf1e4e5" containerName="container-00"
Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.316578 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f54a15-e062-4951-9d0f-ba2abbf1e4e5" containerName="container-00"
Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.317995 4689 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.339272 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d52hb"] Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.443168 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42l42\" (UniqueName: \"kubernetes.io/projected/eef735c9-4b54-484c-8602-e0a462dfdc69-kube-api-access-42l42\") pod \"redhat-marketplace-d52hb\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.443346 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-catalog-content\") pod \"redhat-marketplace-d52hb\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.443471 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-utilities\") pod \"redhat-marketplace-d52hb\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.545672 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-catalog-content\") pod \"redhat-marketplace-d52hb\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.545775 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-utilities\") pod \"redhat-marketplace-d52hb\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.545879 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42l42\" (UniqueName: \"kubernetes.io/projected/eef735c9-4b54-484c-8602-e0a462dfdc69-kube-api-access-42l42\") pod \"redhat-marketplace-d52hb\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.546246 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-catalog-content\") pod \"redhat-marketplace-d52hb\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.546623 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-utilities\") pod \"redhat-marketplace-d52hb\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.571963 4689 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-42l42\" (UniqueName: \"kubernetes.io/projected/eef735c9-4b54-484c-8602-e0a462dfdc69-kube-api-access-42l42\") pod \"redhat-marketplace-d52hb\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " pod="openshift-marketplace/redhat-marketplace-d52hb"
Dec 01 09:42:41 crc kubenswrapper[4689]: I1201 09:42:41.639655 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d52hb"
Dec 01 09:42:42 crc kubenswrapper[4689]: I1201 09:42:42.217699 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d52hb"]
Dec 01 09:42:43 crc kubenswrapper[4689]: I1201 09:42:43.177546 4689 generic.go:334] "Generic (PLEG): container finished" podID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerID="4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd" exitCode=0
Dec 01 09:42:43 crc kubenswrapper[4689]: I1201 09:42:43.177836 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d52hb" event={"ID":"eef735c9-4b54-484c-8602-e0a462dfdc69","Type":"ContainerDied","Data":"4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd"}
Dec 01 09:42:43 crc kubenswrapper[4689]: I1201 09:42:43.177890 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d52hb" event={"ID":"eef735c9-4b54-484c-8602-e0a462dfdc69","Type":"ContainerStarted","Data":"063682886bf5eeb1a5d5bed3f35db9a652eece296fb5fab20ad4a2faae33e832"}
Dec 01 09:42:43 crc kubenswrapper[4689]: I1201 09:42:43.182678 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 01 09:42:44 crc kubenswrapper[4689]: I1201 09:42:44.210936 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d52hb" event={"ID":"eef735c9-4b54-484c-8602-e0a462dfdc69","Type":"ContainerStarted","Data":"c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e"}
Dec 01 09:42:45 crc kubenswrapper[4689]: I1201 09:42:45.236072 4689 generic.go:334] "Generic (PLEG): container finished" podID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerID="c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e" exitCode=0
Dec 01 09:42:45 crc kubenswrapper[4689]: I1201 09:42:45.237189 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d52hb" event={"ID":"eef735c9-4b54-484c-8602-e0a462dfdc69","Type":"ContainerDied","Data":"c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e"}
Dec 01 09:42:46 crc kubenswrapper[4689]: I1201 09:42:46.248828 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d52hb" event={"ID":"eef735c9-4b54-484c-8602-e0a462dfdc69","Type":"ContainerStarted","Data":"6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557"}
Dec 01 09:42:46 crc kubenswrapper[4689]: I1201 09:42:46.273722 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d52hb" podStartSLOduration=2.5771499479999997 podStartE2EDuration="5.273699643s" podCreationTimestamp="2025-12-01 09:42:41 +0000 UTC" firstStartedPulling="2025-12-01 09:42:43.182346416 +0000 UTC m=+3843.254634320" lastFinishedPulling="2025-12-01 09:42:45.878896111 +0000 UTC m=+3845.951184015" observedRunningTime="2025-12-01 09:42:46.270090375 +0000 UTC m=+3846.342378279" watchObservedRunningTime="2025-12-01 09:42:46.273699643 +0000 UTC m=+3846.345987547"
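The "Observed pod startup duration" entry just above carries its own arithmetic: podStartE2EDuration is wall time from podCreationTimestamp to the observed running state, and podStartSLOduration is, as I read the kubelet's startup-latency tracker, the same interval minus the image-pull window between firstStartedPulling and lastFinishedPulling. The monotonic m=+ offsets in the entry make the check exact:

    from decimal import Decimal

    # Figures copied from the pod_startup_latency_tracker entry above.
    e2e        = Decimal("5.273699643")     # podStartE2EDuration, seconds
    pull_start = Decimal("3843.254634320")  # firstStartedPulling, m=+ offset
    pull_end   = Decimal("3845.951184015")  # lastFinishedPulling, m=+ offset

    pull = pull_end - pull_start            # 2.696549695 s spent pulling the image
    print(e2e - pull)                       # 2.577149948, i.e. podStartSLOduration

That difference matches the log's podStartSLOduration=2.5771499479999997 up to float64 rounding.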
Dec 01 09:42:51 crc kubenswrapper[4689]: I1201 09:42:51.647143 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d52hb"
Dec 01 09:42:51 crc kubenswrapper[4689]: I1201 09:42:51.648113 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d52hb"
Dec 01 09:42:51 crc kubenswrapper[4689]: I1201 09:42:51.715965 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d52hb"
Dec 01 09:42:52 crc kubenswrapper[4689]: I1201 09:42:52.358955 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d52hb"
Dec 01 09:42:52 crc kubenswrapper[4689]: I1201 09:42:52.410946 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d52hb"]
Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.047097 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf"
Dec 01 09:42:54 crc kubenswrapper[4689]: E1201 09:42:54.047626 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.322000 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d52hb" podUID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerName="registry-server" containerID="cri-o://6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557" gracePeriod=2
Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.804276 4689 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d52hb" Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.840395 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42l42\" (UniqueName: \"kubernetes.io/projected/eef735c9-4b54-484c-8602-e0a462dfdc69-kube-api-access-42l42\") pod \"eef735c9-4b54-484c-8602-e0a462dfdc69\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.840446 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-catalog-content\") pod \"eef735c9-4b54-484c-8602-e0a462dfdc69\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.840501 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-utilities\") pod \"eef735c9-4b54-484c-8602-e0a462dfdc69\" (UID: \"eef735c9-4b54-484c-8602-e0a462dfdc69\") " Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.841713 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-utilities" (OuterVolumeSpecName: "utilities") pod "eef735c9-4b54-484c-8602-e0a462dfdc69" (UID: "eef735c9-4b54-484c-8602-e0a462dfdc69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.862883 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef735c9-4b54-484c-8602-e0a462dfdc69-kube-api-access-42l42" (OuterVolumeSpecName: "kube-api-access-42l42") pod "eef735c9-4b54-484c-8602-e0a462dfdc69" (UID: "eef735c9-4b54-484c-8602-e0a462dfdc69"). InnerVolumeSpecName "kube-api-access-42l42". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.867926 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eef735c9-4b54-484c-8602-e0a462dfdc69" (UID: "eef735c9-4b54-484c-8602-e0a462dfdc69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.942581 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42l42\" (UniqueName: \"kubernetes.io/projected/eef735c9-4b54-484c-8602-e0a462dfdc69-kube-api-access-42l42\") on node \"crc\" DevicePath \"\"" Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.942616 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:42:54 crc kubenswrapper[4689]: I1201 09:42:54.942628 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eef735c9-4b54-484c-8602-e0a462dfdc69-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.335059 4689 generic.go:334] "Generic (PLEG): container finished" podID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerID="6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557" exitCode=0 Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.335113 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d52hb" event={"ID":"eef735c9-4b54-484c-8602-e0a462dfdc69","Type":"ContainerDied","Data":"6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557"} Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.335413 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d52hb" event={"ID":"eef735c9-4b54-484c-8602-e0a462dfdc69","Type":"ContainerDied","Data":"063682886bf5eeb1a5d5bed3f35db9a652eece296fb5fab20ad4a2faae33e832"} Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.335436 4689 scope.go:117] "RemoveContainer" containerID="6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557" Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.335188 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d52hb"
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.370227 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d52hb"]
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.379550 4689 scope.go:117] "RemoveContainer" containerID="c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e"
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.380472 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d52hb"]
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.413255 4689 scope.go:117] "RemoveContainer" containerID="4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd"
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.477675 4689 scope.go:117] "RemoveContainer" containerID="6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557"
Dec 01 09:42:55 crc kubenswrapper[4689]: E1201 09:42:55.478093 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557\": container with ID starting with 6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557 not found: ID does not exist" containerID="6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557"
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.478148 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557"} err="failed to get container status \"6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557\": rpc error: code = NotFound desc = could not find container \"6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557\": container with ID starting with 6c89ab2ba2cafbde793481f5364e32df69e44e263c118cbcee865059b7b35557 not found: ID does not exist"
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.478175 4689 scope.go:117] "RemoveContainer" containerID="c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e"
Dec 01 09:42:55 crc kubenswrapper[4689]: E1201 09:42:55.478468 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e\": container with ID starting with c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e not found: ID does not exist" containerID="c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e"
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.478503 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e"} err="failed to get container status \"c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e\": rpc error: code = NotFound desc = could not find container \"c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e\": container with ID starting with c2e0a1a8321b1e47f76910d3bb4e4b46801eac6445d573f5099f5af6f93e8b7e not found: ID does not exist"
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.478521 4689 scope.go:117] "RemoveContainer" containerID="4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd"
Dec 01 09:42:55 crc kubenswrapper[4689]: E1201 09:42:55.478720 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd\": container with ID starting with 4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd not found: ID does not exist" containerID="4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd"
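Every "ContainerStatus from runtime service failed ... NotFound" in this teardown names a container that a RemoveContainer call had already handled moments earlier, so on this evidence the errors look like a benign double-delete race against CRI-O rather than lost containers. A quick cross-check over a dump, under the same hypothetical file name and format assumptions as the sketch further up:

    import re

    text = open("kubelet.log").read()  # hypothetical dump of this journal

    # IDs kubelet removed vs. IDs the runtime later reported as NotFound.
    removed = set(re.findall(r'"RemoveContainer" containerID="([0-9a-f]{64})"', text))
    missing = set(re.findall(r'could not find container \\"([0-9a-f]{64})\\"', text))

    # Expect an empty set: NotFound should only name containers already removed.
    print(missing - removed)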
Dec 01 09:42:55 crc kubenswrapper[4689]: I1201 09:42:55.478749 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd"} err="failed to get container status \"4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd\": rpc error: code = NotFound desc = could not find container \"4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd\": container with ID starting with 4ae96e93c35a629a4f75d049190331a0e5b37c64a22c6836bb1fe9437f0f8bdd not found: ID does not exist"
Dec 01 09:42:57 crc kubenswrapper[4689]: I1201 09:42:57.069133 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef735c9-4b54-484c-8602-e0a462dfdc69" path="/var/lib/kubelet/pods/eef735c9-4b54-484c-8602-e0a462dfdc69/volumes"
Dec 01 09:43:09 crc kubenswrapper[4689]: I1201 09:43:09.047684 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf"
Dec 01 09:43:09 crc kubenswrapper[4689]: E1201 09:43:09.048506 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:43:16 crc kubenswrapper[4689]: I1201 09:43:16.589563 4689 generic.go:334] "Generic (PLEG): container finished" podID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" containerID="fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc" exitCode=0
Dec 01 09:43:16 crc kubenswrapper[4689]: I1201 09:43:16.589661 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" event={"ID":"d61f8f56-c6aa-469c-8ffc-178814fe85e5","Type":"ContainerDied","Data":"fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc"}
Dec 01 09:43:16 crc kubenswrapper[4689]: I1201 09:43:16.590636 4689 scope.go:117] "RemoveContainer" containerID="fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc"
Dec 01 09:43:17 crc kubenswrapper[4689]: I1201 09:43:17.238707 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-f6hnn_must-gather-sv7dg_d61f8f56-c6aa-469c-8ffc-178814fe85e5/gather/0.log"
Dec 01 09:43:20 crc kubenswrapper[4689]: I1201 09:43:20.047131 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf"
Dec 01 09:43:20 crc kubenswrapper[4689]: I1201 09:43:20.633888 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"02184f9ae082d659d01c6605cb27fe963a839349b25bcf51719a6228ec800093"}
Dec 01 09:43:25 crc kubenswrapper[4689]: I1201 09:43:25.623469 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-f6hnn/must-gather-sv7dg"]
Dec 01 09:43:25 crc
kubenswrapper[4689]: I1201 09:43:25.624337 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" podUID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" containerName="copy" containerID="cri-o://498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb" gracePeriod=2 Dec 01 09:43:25 crc kubenswrapper[4689]: I1201 09:43:25.637202 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-f6hnn/must-gather-sv7dg"] Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.078917 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-f6hnn_must-gather-sv7dg_d61f8f56-c6aa-469c-8ffc-178814fe85e5/copy/0.log" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.079617 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.167158 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsrrt\" (UniqueName: \"kubernetes.io/projected/d61f8f56-c6aa-469c-8ffc-178814fe85e5-kube-api-access-qsrrt\") pod \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\" (UID: \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\") " Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.167234 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d61f8f56-c6aa-469c-8ffc-178814fe85e5-must-gather-output\") pod \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\" (UID: \"d61f8f56-c6aa-469c-8ffc-178814fe85e5\") " Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.174162 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61f8f56-c6aa-469c-8ffc-178814fe85e5-kube-api-access-qsrrt" (OuterVolumeSpecName: "kube-api-access-qsrrt") pod "d61f8f56-c6aa-469c-8ffc-178814fe85e5" (UID: "d61f8f56-c6aa-469c-8ffc-178814fe85e5"). InnerVolumeSpecName "kube-api-access-qsrrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.269582 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsrrt\" (UniqueName: \"kubernetes.io/projected/d61f8f56-c6aa-469c-8ffc-178814fe85e5-kube-api-access-qsrrt\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.363880 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61f8f56-c6aa-469c-8ffc-178814fe85e5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d61f8f56-c6aa-469c-8ffc-178814fe85e5" (UID: "d61f8f56-c6aa-469c-8ffc-178814fe85e5"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.371983 4689 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d61f8f56-c6aa-469c-8ffc-178814fe85e5-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.694775 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-f6hnn_must-gather-sv7dg_d61f8f56-c6aa-469c-8ffc-178814fe85e5/copy/0.log" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.696080 4689 generic.go:334] "Generic (PLEG): container finished" podID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" containerID="498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb" exitCode=143 Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.696210 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-f6hnn/must-gather-sv7dg" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.696212 4689 scope.go:117] "RemoveContainer" containerID="498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.746101 4689 scope.go:117] "RemoveContainer" containerID="fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.856492 4689 scope.go:117] "RemoveContainer" containerID="498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb" Dec 01 09:43:26 crc kubenswrapper[4689]: E1201 09:43:26.856840 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb\": container with ID starting with 498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb not found: ID does not exist" containerID="498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.856879 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb"} err="failed to get container status \"498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb\": rpc error: code = NotFound desc = could not find container \"498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb\": container with ID starting with 498ac31847139f993dafde9a0199b69197945c82f4da346fe45cd095927ec4fb not found: ID does not exist" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.856907 4689 scope.go:117] "RemoveContainer" containerID="fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc" Dec 01 09:43:26 crc kubenswrapper[4689]: E1201 09:43:26.860779 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc\": container with ID starting with fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc not found: ID does not exist" containerID="fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc" Dec 01 09:43:26 crc kubenswrapper[4689]: I1201 09:43:26.860817 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc"} err="failed to get container status 
\"fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc\": rpc error: code = NotFound desc = could not find container \"fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc\": container with ID starting with fcd79970d34b8e555440efed755851e8ca9afd6ab4513a418f6d237e722097cc not found: ID does not exist" Dec 01 09:43:27 crc kubenswrapper[4689]: I1201 09:43:27.058612 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" path="/var/lib/kubelet/pods/d61f8f56-c6aa-469c-8ffc-178814fe85e5/volumes" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.028208 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wtwp5"] Dec 01 09:43:42 crc kubenswrapper[4689]: E1201 09:43:42.029520 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerName="extract-utilities" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.029548 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerName="extract-utilities" Dec 01 09:43:42 crc kubenswrapper[4689]: E1201 09:43:42.029613 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerName="registry-server" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.029627 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerName="registry-server" Dec 01 09:43:42 crc kubenswrapper[4689]: E1201 09:43:42.029649 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" containerName="gather" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.029660 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" containerName="gather" Dec 01 09:43:42 crc kubenswrapper[4689]: E1201 09:43:42.029683 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerName="extract-content" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.029694 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerName="extract-content" Dec 01 09:43:42 crc kubenswrapper[4689]: E1201 09:43:42.029713 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" containerName="copy" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.029723 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" containerName="copy" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.030038 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" containerName="copy" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.030067 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef735c9-4b54-484c-8602-e0a462dfdc69" containerName="registry-server" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.030083 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61f8f56-c6aa-469c-8ffc-178814fe85e5" containerName="gather" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.037351 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.050815 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtwp5"] Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.161190 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v52q\" (UniqueName: \"kubernetes.io/projected/00c2dba0-444f-4f17-80df-c2380723c45a-kube-api-access-8v52q\") pod \"community-operators-wtwp5\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.161250 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-catalog-content\") pod \"community-operators-wtwp5\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.161508 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-utilities\") pod \"community-operators-wtwp5\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.263915 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v52q\" (UniqueName: \"kubernetes.io/projected/00c2dba0-444f-4f17-80df-c2380723c45a-kube-api-access-8v52q\") pod \"community-operators-wtwp5\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.263973 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-catalog-content\") pod \"community-operators-wtwp5\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.264012 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-utilities\") pod \"community-operators-wtwp5\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.264475 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-catalog-content\") pod \"community-operators-wtwp5\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.264541 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-utilities\") pod \"community-operators-wtwp5\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.287879 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8v52q\" (UniqueName: \"kubernetes.io/projected/00c2dba0-444f-4f17-80df-c2380723c45a-kube-api-access-8v52q\") pod \"community-operators-wtwp5\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.394456 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:42 crc kubenswrapper[4689]: I1201 09:43:42.956291 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wtwp5"] Dec 01 09:43:43 crc kubenswrapper[4689]: I1201 09:43:43.872282 4689 generic.go:334] "Generic (PLEG): container finished" podID="00c2dba0-444f-4f17-80df-c2380723c45a" containerID="c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf" exitCode=0 Dec 01 09:43:43 crc kubenswrapper[4689]: I1201 09:43:43.872502 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwp5" event={"ID":"00c2dba0-444f-4f17-80df-c2380723c45a","Type":"ContainerDied","Data":"c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf"} Dec 01 09:43:43 crc kubenswrapper[4689]: I1201 09:43:43.872586 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwp5" event={"ID":"00c2dba0-444f-4f17-80df-c2380723c45a","Type":"ContainerStarted","Data":"db3df5b79ab0b14fae8a5cc2946549a97c2c8eeee9af1f8e528a8eb8d89670a9"} Dec 01 09:43:44 crc kubenswrapper[4689]: I1201 09:43:44.882894 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwp5" event={"ID":"00c2dba0-444f-4f17-80df-c2380723c45a","Type":"ContainerStarted","Data":"ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a"} Dec 01 09:43:45 crc kubenswrapper[4689]: I1201 09:43:45.899275 4689 generic.go:334] "Generic (PLEG): container finished" podID="00c2dba0-444f-4f17-80df-c2380723c45a" containerID="ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a" exitCode=0 Dec 01 09:43:45 crc kubenswrapper[4689]: I1201 09:43:45.899347 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwp5" event={"ID":"00c2dba0-444f-4f17-80df-c2380723c45a","Type":"ContainerDied","Data":"ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a"} Dec 01 09:43:46 crc kubenswrapper[4689]: I1201 09:43:46.910209 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwp5" event={"ID":"00c2dba0-444f-4f17-80df-c2380723c45a","Type":"ContainerStarted","Data":"ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987"} Dec 01 09:43:46 crc kubenswrapper[4689]: I1201 09:43:46.937461 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wtwp5" podStartSLOduration=3.325247184 podStartE2EDuration="5.937358735s" podCreationTimestamp="2025-12-01 09:43:41 +0000 UTC" firstStartedPulling="2025-12-01 09:43:43.874339752 +0000 UTC m=+3903.946627666" lastFinishedPulling="2025-12-01 09:43:46.486451313 +0000 UTC m=+3906.558739217" observedRunningTime="2025-12-01 09:43:46.931823584 +0000 UTC m=+3907.004111508" watchObservedRunningTime="2025-12-01 09:43:46.937358735 +0000 UTC m=+3907.009646639" Dec 01 09:43:52 crc kubenswrapper[4689]: I1201 09:43:52.394598 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:52 crc kubenswrapper[4689]: I1201 09:43:52.395216 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:52 crc kubenswrapper[4689]: I1201 09:43:52.451042 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:53 crc kubenswrapper[4689]: I1201 09:43:53.018874 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:53 crc kubenswrapper[4689]: I1201 09:43:53.081100 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtwp5"] Dec 01 09:43:54 crc kubenswrapper[4689]: I1201 09:43:54.980853 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wtwp5" podUID="00c2dba0-444f-4f17-80df-c2380723c45a" containerName="registry-server" containerID="cri-o://ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987" gracePeriod=2 Dec 01 09:43:56 crc kubenswrapper[4689]: I1201 09:43:56.919390 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:56.997911 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-utilities\") pod \"00c2dba0-444f-4f17-80df-c2380723c45a\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:56.998045 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v52q\" (UniqueName: \"kubernetes.io/projected/00c2dba0-444f-4f17-80df-c2380723c45a-kube-api-access-8v52q\") pod \"00c2dba0-444f-4f17-80df-c2380723c45a\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:56.998106 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-catalog-content\") pod \"00c2dba0-444f-4f17-80df-c2380723c45a\" (UID: \"00c2dba0-444f-4f17-80df-c2380723c45a\") " Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.002924 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-utilities" (OuterVolumeSpecName: "utilities") pod "00c2dba0-444f-4f17-80df-c2380723c45a" (UID: "00c2dba0-444f-4f17-80df-c2380723c45a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.028579 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c2dba0-444f-4f17-80df-c2380723c45a-kube-api-access-8v52q" (OuterVolumeSpecName: "kube-api-access-8v52q") pod "00c2dba0-444f-4f17-80df-c2380723c45a" (UID: "00c2dba0-444f-4f17-80df-c2380723c45a"). InnerVolumeSpecName "kube-api-access-8v52q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.101695 4689 generic.go:334] "Generic (PLEG): container finished" podID="00c2dba0-444f-4f17-80df-c2380723c45a" containerID="ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987" exitCode=0 Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.101959 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwp5" event={"ID":"00c2dba0-444f-4f17-80df-c2380723c45a","Type":"ContainerDied","Data":"ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987"} Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.102074 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wtwp5" event={"ID":"00c2dba0-444f-4f17-80df-c2380723c45a","Type":"ContainerDied","Data":"db3df5b79ab0b14fae8a5cc2946549a97c2c8eeee9af1f8e528a8eb8d89670a9"} Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.102079 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wtwp5" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.102123 4689 scope.go:117] "RemoveContainer" containerID="ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.110101 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.110158 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v52q\" (UniqueName: \"kubernetes.io/projected/00c2dba0-444f-4f17-80df-c2380723c45a-kube-api-access-8v52q\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.124713 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00c2dba0-444f-4f17-80df-c2380723c45a" (UID: "00c2dba0-444f-4f17-80df-c2380723c45a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.145618 4689 scope.go:117] "RemoveContainer" containerID="ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.200285 4689 scope.go:117] "RemoveContainer" containerID="c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.211554 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c2dba0-444f-4f17-80df-c2380723c45a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.250428 4689 scope.go:117] "RemoveContainer" containerID="ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987" Dec 01 09:43:57 crc kubenswrapper[4689]: E1201 09:43:57.250967 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987\": container with ID starting with ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987 not found: ID does not exist" containerID="ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.251087 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987"} err="failed to get container status \"ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987\": rpc error: code = NotFound desc = could not find container \"ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987\": container with ID starting with ca947fc859009f8143bd9d2be2efd742f76ab879186b540064df68762fa30987 not found: ID does not exist" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.251186 4689 scope.go:117] "RemoveContainer" containerID="ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a" Dec 01 09:43:57 crc kubenswrapper[4689]: E1201 09:43:57.251705 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a\": container with ID starting with ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a not found: ID does not exist" containerID="ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.251828 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a"} err="failed to get container status \"ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a\": rpc error: code = NotFound desc = could not find container \"ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a\": container with ID starting with ff3e823f50881c7631c46d17e3b147eb6d3c3694378a2ae1d2184e80f02ee71a not found: ID does not exist" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.251946 4689 scope.go:117] "RemoveContainer" containerID="c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf" Dec 01 09:43:57 crc kubenswrapper[4689]: E1201 09:43:57.252319 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf\": container with ID starting with c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf not found: ID does not exist" containerID="c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.252439 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf"} err="failed to get container status \"c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf\": rpc error: code = NotFound desc = could not find container \"c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf\": container with ID starting with c1d450353f1e59b60cbb18b6e4de47cb0da1ab004c45f7a3b6e7753a7c663faf not found: ID does not exist" Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.437348 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wtwp5"] Dec 01 09:43:57 crc kubenswrapper[4689]: I1201 09:43:57.448506 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wtwp5"] Dec 01 09:43:59 crc kubenswrapper[4689]: I1201 09:43:59.058678 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c2dba0-444f-4f17-80df-c2380723c45a" path="/var/lib/kubelet/pods/00c2dba0-444f-4f17-80df-c2380723c45a/volumes" Dec 01 09:44:00 crc kubenswrapper[4689]: I1201 09:44:00.188674 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-5c56f" podUID="4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.272222 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lm6l4"] Dec 01 09:44:17 crc kubenswrapper[4689]: E1201 09:44:17.273214 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c2dba0-444f-4f17-80df-c2380723c45a" containerName="registry-server" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.273228 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c2dba0-444f-4f17-80df-c2380723c45a" containerName="registry-server" Dec 01 09:44:17 crc kubenswrapper[4689]: E1201 09:44:17.273243 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c2dba0-444f-4f17-80df-c2380723c45a" containerName="extract-content" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.273249 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c2dba0-444f-4f17-80df-c2380723c45a" containerName="extract-content" Dec 01 09:44:17 crc kubenswrapper[4689]: E1201 09:44:17.273279 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c2dba0-444f-4f17-80df-c2380723c45a" containerName="extract-utilities" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.273286 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c2dba0-444f-4f17-80df-c2380723c45a" containerName="extract-utilities" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.273547 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c2dba0-444f-4f17-80df-c2380723c45a" containerName="registry-server" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.275536 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.301319 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lm6l4"] Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.407537 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-catalog-content\") pod \"certified-operators-lm6l4\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.407775 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-utilities\") pod \"certified-operators-lm6l4\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.407922 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmnxw\" (UniqueName: \"kubernetes.io/projected/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-kube-api-access-nmnxw\") pod \"certified-operators-lm6l4\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.509471 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-catalog-content\") pod \"certified-operators-lm6l4\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.509645 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-utilities\") pod \"certified-operators-lm6l4\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.509737 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmnxw\" (UniqueName: \"kubernetes.io/projected/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-kube-api-access-nmnxw\") pod \"certified-operators-lm6l4\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.510313 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-catalog-content\") pod \"certified-operators-lm6l4\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.510554 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-utilities\") pod \"certified-operators-lm6l4\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.532387 4689 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nmnxw\" (UniqueName: \"kubernetes.io/projected/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-kube-api-access-nmnxw\") pod \"certified-operators-lm6l4\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.601869 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:17 crc kubenswrapper[4689]: I1201 09:44:17.970063 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lm6l4"] Dec 01 09:44:18 crc kubenswrapper[4689]: I1201 09:44:18.384156 4689 generic.go:334] "Generic (PLEG): container finished" podID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerID="5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07" exitCode=0 Dec 01 09:44:18 crc kubenswrapper[4689]: I1201 09:44:18.384245 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm6l4" event={"ID":"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19","Type":"ContainerDied","Data":"5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07"} Dec 01 09:44:18 crc kubenswrapper[4689]: I1201 09:44:18.384504 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm6l4" event={"ID":"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19","Type":"ContainerStarted","Data":"458cd5fae9357e62d9d1c118ba2b055d2477b6213ad4ce3d2bcc100c04b90723"} Dec 01 09:44:19 crc kubenswrapper[4689]: I1201 09:44:19.397744 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm6l4" event={"ID":"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19","Type":"ContainerStarted","Data":"c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade"} Dec 01 09:44:21 crc kubenswrapper[4689]: I1201 09:44:21.434044 4689 generic.go:334] "Generic (PLEG): container finished" podID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerID="c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade" exitCode=0 Dec 01 09:44:21 crc kubenswrapper[4689]: I1201 09:44:21.434117 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm6l4" event={"ID":"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19","Type":"ContainerDied","Data":"c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade"} Dec 01 09:44:23 crc kubenswrapper[4689]: I1201 09:44:23.455410 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm6l4" event={"ID":"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19","Type":"ContainerStarted","Data":"33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88"} Dec 01 09:44:23 crc kubenswrapper[4689]: I1201 09:44:23.485742 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lm6l4" podStartSLOduration=2.343325008 podStartE2EDuration="6.485715701s" podCreationTimestamp="2025-12-01 09:44:17 +0000 UTC" firstStartedPulling="2025-12-01 09:44:18.386392549 +0000 UTC m=+3938.458680453" lastFinishedPulling="2025-12-01 09:44:22.528783222 +0000 UTC m=+3942.601071146" observedRunningTime="2025-12-01 09:44:23.478863054 +0000 UTC m=+3943.551150998" watchObservedRunningTime="2025-12-01 09:44:23.485715701 +0000 UTC m=+3943.558003605" Dec 01 09:44:27 crc kubenswrapper[4689]: I1201 09:44:27.602036 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:27 crc kubenswrapper[4689]: I1201 09:44:27.602669 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:27 crc kubenswrapper[4689]: I1201 09:44:27.650749 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:28 crc kubenswrapper[4689]: I1201 09:44:28.623835 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:28 crc kubenswrapper[4689]: I1201 09:44:28.739063 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lm6l4"] Dec 01 09:44:30 crc kubenswrapper[4689]: I1201 09:44:30.520551 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lm6l4" podUID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerName="registry-server" containerID="cri-o://33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88" gracePeriod=2 Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.053760 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.189205 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmnxw\" (UniqueName: \"kubernetes.io/projected/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-kube-api-access-nmnxw\") pod \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.189322 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-catalog-content\") pod \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.189523 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-utilities\") pod \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\" (UID: \"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19\") " Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.190262 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-utilities" (OuterVolumeSpecName: "utilities") pod "10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" (UID: "10c7d8b4-c2eb-45c6-943f-f8a7967b6e19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.195656 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-kube-api-access-nmnxw" (OuterVolumeSpecName: "kube-api-access-nmnxw") pod "10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" (UID: "10c7d8b4-c2eb-45c6-943f-f8a7967b6e19"). InnerVolumeSpecName "kube-api-access-nmnxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.259497 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" (UID: "10c7d8b4-c2eb-45c6-943f-f8a7967b6e19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.292345 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmnxw\" (UniqueName: \"kubernetes.io/projected/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-kube-api-access-nmnxw\") on node \"crc\" DevicePath \"\"" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.292438 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.292449 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.534266 4689 generic.go:334] "Generic (PLEG): container finished" podID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerID="33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88" exitCode=0 Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.534324 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm6l4" event={"ID":"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19","Type":"ContainerDied","Data":"33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88"} Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.534499 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lm6l4" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.535737 4689 scope.go:117] "RemoveContainer" containerID="33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.535602 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm6l4" event={"ID":"10c7d8b4-c2eb-45c6-943f-f8a7967b6e19","Type":"ContainerDied","Data":"458cd5fae9357e62d9d1c118ba2b055d2477b6213ad4ce3d2bcc100c04b90723"} Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.556785 4689 scope.go:117] "RemoveContainer" containerID="c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.575627 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lm6l4"] Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.584475 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lm6l4"] Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.595921 4689 scope.go:117] "RemoveContainer" containerID="5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.646749 4689 scope.go:117] "RemoveContainer" containerID="33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88" Dec 01 09:44:31 crc kubenswrapper[4689]: E1201 09:44:31.647440 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88\": container with ID starting with 33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88 not found: ID does not exist" containerID="33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.647495 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88"} err="failed to get container status \"33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88\": rpc error: code = NotFound desc = could not find container \"33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88\": container with ID starting with 33b26d66d080852acaa629a3c40108db54f7a1e58df414d058a79e7ced0e4d88 not found: ID does not exist" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.647524 4689 scope.go:117] "RemoveContainer" containerID="c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade" Dec 01 09:44:31 crc kubenswrapper[4689]: E1201 09:44:31.647816 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade\": container with ID starting with c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade not found: ID does not exist" containerID="c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.647838 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade"} err="failed to get container status \"c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade\": rpc error: code = NotFound desc = could not find 
container \"c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade\": container with ID starting with c0228d2bac87780e16844bd2f1c12ee4f7202bc23391653e33551ae4a7153ade not found: ID does not exist" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.647852 4689 scope.go:117] "RemoveContainer" containerID="5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07" Dec 01 09:44:31 crc kubenswrapper[4689]: E1201 09:44:31.648066 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07\": container with ID starting with 5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07 not found: ID does not exist" containerID="5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07" Dec 01 09:44:31 crc kubenswrapper[4689]: I1201 09:44:31.648081 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07"} err="failed to get container status \"5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07\": rpc error: code = NotFound desc = could not find container \"5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07\": container with ID starting with 5b2c1b4c35ef31d77aa5a27f60525828c3696a55ff35b681878131648d034b07 not found: ID does not exist" Dec 01 09:44:33 crc kubenswrapper[4689]: I1201 09:44:33.060245 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" path="/var/lib/kubelet/pods/10c7d8b4-c2eb-45c6-943f-f8a7967b6e19/volumes" Dec 01 09:44:51 crc kubenswrapper[4689]: I1201 09:44:51.449499 4689 scope.go:117] "RemoveContainer" containerID="b01ceed10892271b43789bf7ad7b96a4edb6d8dfba8866f5401eda94e8c8d239" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.157346 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n"] Dec 01 09:45:00 crc kubenswrapper[4689]: E1201 09:45:00.158493 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerName="registry-server" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.158510 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerName="registry-server" Dec 01 09:45:00 crc kubenswrapper[4689]: E1201 09:45:00.158533 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerName="extract-content" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.158538 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerName="extract-content" Dec 01 09:45:00 crc kubenswrapper[4689]: E1201 09:45:00.158552 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerName="extract-utilities" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.158559 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerName="extract-utilities" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.158739 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c7d8b4-c2eb-45c6-943f-f8a7967b6e19" containerName="registry-server" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.159609 4689 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.162226 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.162331 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.175572 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n"] Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.246056 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-secret-volume\") pod \"collect-profiles-29409705-8dl8n\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.246122 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbbcc\" (UniqueName: \"kubernetes.io/projected/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-kube-api-access-hbbcc\") pod \"collect-profiles-29409705-8dl8n\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.246270 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-config-volume\") pod \"collect-profiles-29409705-8dl8n\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.348306 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-secret-volume\") pod \"collect-profiles-29409705-8dl8n\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.348480 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbbcc\" (UniqueName: \"kubernetes.io/projected/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-kube-api-access-hbbcc\") pod \"collect-profiles-29409705-8dl8n\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.348604 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-config-volume\") pod \"collect-profiles-29409705-8dl8n\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.349842 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-config-volume\") pod \"collect-profiles-29409705-8dl8n\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.363346 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-secret-volume\") pod \"collect-profiles-29409705-8dl8n\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.367916 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbbcc\" (UniqueName: \"kubernetes.io/projected/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-kube-api-access-hbbcc\") pod \"collect-profiles-29409705-8dl8n\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:00 crc kubenswrapper[4689]: I1201 09:45:00.487452 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:01 crc kubenswrapper[4689]: I1201 09:45:01.085069 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n"] Dec 01 09:45:01 crc kubenswrapper[4689]: I1201 09:45:01.821876 4689 generic.go:334] "Generic (PLEG): container finished" podID="ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18" containerID="160620bdd391e030dfe9128304b37593a7e23cef46fb24c3e9f6615a48f3dc8a" exitCode=0 Dec 01 09:45:01 crc kubenswrapper[4689]: I1201 09:45:01.822083 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" event={"ID":"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18","Type":"ContainerDied","Data":"160620bdd391e030dfe9128304b37593a7e23cef46fb24c3e9f6615a48f3dc8a"} Dec 01 09:45:01 crc kubenswrapper[4689]: I1201 09:45:01.823407 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" event={"ID":"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18","Type":"ContainerStarted","Data":"8847e4a1e6c342e6cf7e769fe8485042f7adad8fd0acc48f7d5dcd9d75ee8b13"} Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.252519 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.410600 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbbcc\" (UniqueName: \"kubernetes.io/projected/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-kube-api-access-hbbcc\") pod \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.410737 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-secret-volume\") pod \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.410921 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-config-volume\") pod \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\" (UID: \"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18\") " Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.412009 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-config-volume" (OuterVolumeSpecName: "config-volume") pod "ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18" (UID: "ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.417269 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18" (UID: "ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.422607 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-kube-api-access-hbbcc" (OuterVolumeSpecName: "kube-api-access-hbbcc") pod "ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18" (UID: "ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18"). InnerVolumeSpecName "kube-api-access-hbbcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.513308 4689 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.513980 4689 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.514010 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbbcc\" (UniqueName: \"kubernetes.io/projected/ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18-kube-api-access-hbbcc\") on node \"crc\" DevicePath \"\"" Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.842101 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" event={"ID":"ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18","Type":"ContainerDied","Data":"8847e4a1e6c342e6cf7e769fe8485042f7adad8fd0acc48f7d5dcd9d75ee8b13"} Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.842157 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8847e4a1e6c342e6cf7e769fe8485042f7adad8fd0acc48f7d5dcd9d75ee8b13" Dec 01 09:45:03 crc kubenswrapper[4689]: I1201 09:45:03.842231 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409705-8dl8n" Dec 01 09:45:04 crc kubenswrapper[4689]: I1201 09:45:04.355154 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4"] Dec 01 09:45:04 crc kubenswrapper[4689]: I1201 09:45:04.367529 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409660-gb5z4"] Dec 01 09:45:05 crc kubenswrapper[4689]: I1201 09:45:05.062666 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b222f1da-8be5-48e4-acfd-0d2979cd16f9" path="/var/lib/kubelet/pods/b222f1da-8be5-48e4-acfd-0d2979cd16f9/volumes" Dec 01 09:45:39 crc kubenswrapper[4689]: I1201 09:45:39.147341 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:45:39 crc kubenswrapper[4689]: I1201 09:45:39.147880 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.284673 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cdmgd"] Dec 01 09:45:50 crc kubenswrapper[4689]: E1201 09:45:50.285717 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18" containerName="collect-profiles" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.285735 4689 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18" containerName="collect-profiles" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.286002 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6dee67-ebf8-4e1a-bf38-909f8cf9cb18" containerName="collect-profiles" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.287723 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.331592 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdmgd"] Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.471154 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-catalog-content\") pod \"redhat-operators-cdmgd\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.471336 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-utilities\") pod \"redhat-operators-cdmgd\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.471388 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq2tv\" (UniqueName: \"kubernetes.io/projected/a4324480-e485-4833-bc8c-0073ec58fdd0-kube-api-access-tq2tv\") pod \"redhat-operators-cdmgd\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.573810 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-utilities\") pod \"redhat-operators-cdmgd\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.574205 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq2tv\" (UniqueName: \"kubernetes.io/projected/a4324480-e485-4833-bc8c-0073ec58fdd0-kube-api-access-tq2tv\") pod \"redhat-operators-cdmgd\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.574439 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-utilities\") pod \"redhat-operators-cdmgd\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.574462 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-catalog-content\") pod \"redhat-operators-cdmgd\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.574828 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-catalog-content\") pod \"redhat-operators-cdmgd\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.606988 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq2tv\" (UniqueName: \"kubernetes.io/projected/a4324480-e485-4833-bc8c-0073ec58fdd0-kube-api-access-tq2tv\") pod \"redhat-operators-cdmgd\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:50 crc kubenswrapper[4689]: I1201 09:45:50.618566 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:45:51 crc kubenswrapper[4689]: I1201 09:45:51.193957 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdmgd"] Dec 01 09:45:51 crc kubenswrapper[4689]: W1201 09:45:51.199855 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4324480_e485_4833_bc8c_0073ec58fdd0.slice/crio-f53b3590ad04e2c4318b79d65dffe6380f6bffc9d1a05431578bb6707f87f77e WatchSource:0}: Error finding container f53b3590ad04e2c4318b79d65dffe6380f6bffc9d1a05431578bb6707f87f77e: Status 404 returned error can't find the container with id f53b3590ad04e2c4318b79d65dffe6380f6bffc9d1a05431578bb6707f87f77e Dec 01 09:45:51 crc kubenswrapper[4689]: I1201 09:45:51.352047 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdmgd" event={"ID":"a4324480-e485-4833-bc8c-0073ec58fdd0","Type":"ContainerStarted","Data":"f53b3590ad04e2c4318b79d65dffe6380f6bffc9d1a05431578bb6707f87f77e"} Dec 01 09:45:51 crc kubenswrapper[4689]: I1201 09:45:51.552221 4689 scope.go:117] "RemoveContainer" containerID="d4d8dcb2c79858da09495dc2092899c8fafb4be4d595dd1578c87c7627f8aba6" Dec 01 09:45:52 crc kubenswrapper[4689]: I1201 09:45:52.363418 4689 generic.go:334] "Generic (PLEG): container finished" podID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerID="3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0" exitCode=0 Dec 01 09:45:52 crc kubenswrapper[4689]: I1201 09:45:52.363558 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdmgd" event={"ID":"a4324480-e485-4833-bc8c-0073ec58fdd0","Type":"ContainerDied","Data":"3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0"} Dec 01 09:45:54 crc kubenswrapper[4689]: I1201 09:45:54.388679 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdmgd" event={"ID":"a4324480-e485-4833-bc8c-0073ec58fdd0","Type":"ContainerStarted","Data":"743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652"} Dec 01 09:45:58 crc kubenswrapper[4689]: I1201 09:45:58.426792 4689 generic.go:334] "Generic (PLEG): container finished" podID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerID="743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652" exitCode=0 Dec 01 09:45:58 crc kubenswrapper[4689]: I1201 09:45:58.426873 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdmgd" event={"ID":"a4324480-e485-4833-bc8c-0073ec58fdd0","Type":"ContainerDied","Data":"743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652"} Dec 01 09:45:59 crc kubenswrapper[4689]: 
I1201 09:45:59.446224 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdmgd" event={"ID":"a4324480-e485-4833-bc8c-0073ec58fdd0","Type":"ContainerStarted","Data":"95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b"} Dec 01 09:45:59 crc kubenswrapper[4689]: I1201 09:45:59.476193 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cdmgd" podStartSLOduration=2.829076131 podStartE2EDuration="9.476160855s" podCreationTimestamp="2025-12-01 09:45:50 +0000 UTC" firstStartedPulling="2025-12-01 09:45:52.36586501 +0000 UTC m=+4032.438152914" lastFinishedPulling="2025-12-01 09:45:59.012949734 +0000 UTC m=+4039.085237638" observedRunningTime="2025-12-01 09:45:59.472848924 +0000 UTC m=+4039.545136848" watchObservedRunningTime="2025-12-01 09:45:59.476160855 +0000 UTC m=+4039.548448759" Dec 01 09:46:00 crc kubenswrapper[4689]: I1201 09:46:00.619390 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:46:00 crc kubenswrapper[4689]: I1201 09:46:00.619478 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:46:01 crc kubenswrapper[4689]: I1201 09:46:01.674813 4689 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdmgd" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerName="registry-server" probeResult="failure" output=< Dec 01 09:46:01 crc kubenswrapper[4689]: timeout: failed to connect service ":50051" within 1s Dec 01 09:46:01 crc kubenswrapper[4689]: > Dec 01 09:46:09 crc kubenswrapper[4689]: I1201 09:46:09.146891 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:46:09 crc kubenswrapper[4689]: I1201 09:46:09.147331 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:46:10 crc kubenswrapper[4689]: I1201 09:46:10.864867 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:46:10 crc kubenswrapper[4689]: I1201 09:46:10.928502 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:46:11 crc kubenswrapper[4689]: I1201 09:46:11.100439 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdmgd"] Dec 01 09:46:12 crc kubenswrapper[4689]: I1201 09:46:12.837755 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cdmgd" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerName="registry-server" containerID="cri-o://95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b" gracePeriod=2 Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.420355 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.560080 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-catalog-content\") pod \"a4324480-e485-4833-bc8c-0073ec58fdd0\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.560156 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-utilities\") pod \"a4324480-e485-4833-bc8c-0073ec58fdd0\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.560203 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq2tv\" (UniqueName: \"kubernetes.io/projected/a4324480-e485-4833-bc8c-0073ec58fdd0-kube-api-access-tq2tv\") pod \"a4324480-e485-4833-bc8c-0073ec58fdd0\" (UID: \"a4324480-e485-4833-bc8c-0073ec58fdd0\") " Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.561301 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-utilities" (OuterVolumeSpecName: "utilities") pod "a4324480-e485-4833-bc8c-0073ec58fdd0" (UID: "a4324480-e485-4833-bc8c-0073ec58fdd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.569629 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4324480-e485-4833-bc8c-0073ec58fdd0-kube-api-access-tq2tv" (OuterVolumeSpecName: "kube-api-access-tq2tv") pod "a4324480-e485-4833-bc8c-0073ec58fdd0" (UID: "a4324480-e485-4833-bc8c-0073ec58fdd0"). InnerVolumeSpecName "kube-api-access-tq2tv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.605129 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-trlc6/must-gather-j9kx7"] Dec 01 09:46:13 crc kubenswrapper[4689]: E1201 09:46:13.622170 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerName="extract-utilities" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.622236 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerName="extract-utilities" Dec 01 09:46:13 crc kubenswrapper[4689]: E1201 09:46:13.622269 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerName="extract-content" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.622281 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerName="extract-content" Dec 01 09:46:13 crc kubenswrapper[4689]: E1201 09:46:13.622310 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerName="registry-server" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.622322 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerName="registry-server" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.622640 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerName="registry-server" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.624427 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.633036 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-trlc6"/"openshift-service-ca.crt" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.633048 4689 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-trlc6"/"kube-root-ca.crt" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.652055 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-trlc6/must-gather-j9kx7"] Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.663479 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.663523 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq2tv\" (UniqueName: \"kubernetes.io/projected/a4324480-e485-4833-bc8c-0073ec58fdd0-kube-api-access-tq2tv\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.769199 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8s74\" (UniqueName: \"kubernetes.io/projected/6e26c226-e328-4af6-aeff-3579e26ad21e-kube-api-access-g8s74\") pod \"must-gather-j9kx7\" (UID: \"6e26c226-e328-4af6-aeff-3579e26ad21e\") " pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.769283 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/6e26c226-e328-4af6-aeff-3579e26ad21e-must-gather-output\") pod \"must-gather-j9kx7\" (UID: \"6e26c226-e328-4af6-aeff-3579e26ad21e\") " pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.813148 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4324480-e485-4833-bc8c-0073ec58fdd0" (UID: "a4324480-e485-4833-bc8c-0073ec58fdd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.865483 4689 generic.go:334] "Generic (PLEG): container finished" podID="a4324480-e485-4833-bc8c-0073ec58fdd0" containerID="95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b" exitCode=0 Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.865560 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdmgd" event={"ID":"a4324480-e485-4833-bc8c-0073ec58fdd0","Type":"ContainerDied","Data":"95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b"} Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.865591 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdmgd" event={"ID":"a4324480-e485-4833-bc8c-0073ec58fdd0","Type":"ContainerDied","Data":"f53b3590ad04e2c4318b79d65dffe6380f6bffc9d1a05431578bb6707f87f77e"} Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.865627 4689 scope.go:117] "RemoveContainer" containerID="95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.865798 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdmgd" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.871641 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8s74\" (UniqueName: \"kubernetes.io/projected/6e26c226-e328-4af6-aeff-3579e26ad21e-kube-api-access-g8s74\") pod \"must-gather-j9kx7\" (UID: \"6e26c226-e328-4af6-aeff-3579e26ad21e\") " pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.871933 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e26c226-e328-4af6-aeff-3579e26ad21e-must-gather-output\") pod \"must-gather-j9kx7\" (UID: \"6e26c226-e328-4af6-aeff-3579e26ad21e\") " pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.872227 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4324480-e485-4833-bc8c-0073ec58fdd0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.873141 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e26c226-e328-4af6-aeff-3579e26ad21e-must-gather-output\") pod \"must-gather-j9kx7\" (UID: \"6e26c226-e328-4af6-aeff-3579e26ad21e\") " pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.897536 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8s74\" (UniqueName: \"kubernetes.io/projected/6e26c226-e328-4af6-aeff-3579e26ad21e-kube-api-access-g8s74\") pod \"must-gather-j9kx7\" (UID: \"6e26c226-e328-4af6-aeff-3579e26ad21e\") " pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.912121 4689 scope.go:117] "RemoveContainer" containerID="743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.942197 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdmgd"] Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.956317 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cdmgd"] Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.973714 4689 scope.go:117] "RemoveContainer" containerID="3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0" Dec 01 09:46:13 crc kubenswrapper[4689]: I1201 09:46:13.974824 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:46:14 crc kubenswrapper[4689]: I1201 09:46:14.148055 4689 scope.go:117] "RemoveContainer" containerID="95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b" Dec 01 09:46:14 crc kubenswrapper[4689]: E1201 09:46:14.148582 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b\": container with ID starting with 95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b not found: ID does not exist" containerID="95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b" Dec 01 09:46:14 crc kubenswrapper[4689]: I1201 09:46:14.148631 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b"} err="failed to get container status \"95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b\": rpc error: code = NotFound desc = could not find container \"95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b\": container with ID starting with 95d575585894b78a57dc9f0028cbd2981dadea45e1923e3e4bfdf053e1de0b4b not found: ID does not exist" Dec 01 09:46:14 crc kubenswrapper[4689]: I1201 09:46:14.148656 4689 scope.go:117] "RemoveContainer" containerID="743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652" Dec 01 09:46:14 crc kubenswrapper[4689]: E1201 09:46:14.157626 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652\": container with ID starting with 743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652 not found: ID does not exist" containerID="743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652" Dec 01 09:46:14 crc kubenswrapper[4689]: I1201 09:46:14.157675 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652"} err="failed to get container status \"743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652\": rpc error: code = NotFound desc = could not find container \"743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652\": container with ID starting with 743a309231b42ac01e06c8bc5703ad411f8ce35959b34b2f9f408b0f596e1652 not found: ID does not exist" Dec 01 09:46:14 crc kubenswrapper[4689]: I1201 09:46:14.157703 4689 scope.go:117] "RemoveContainer" containerID="3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0" Dec 01 09:46:14 crc kubenswrapper[4689]: E1201 09:46:14.158222 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0\": container with ID starting with 3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0 not found: ID does not exist" containerID="3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0" Dec 01 09:46:14 crc kubenswrapper[4689]: I1201 09:46:14.158251 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0"} err="failed to get container status \"3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0\": rpc error: code = 
NotFound desc = could not find container \"3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0\": container with ID starting with 3dda6f41be17fe9196c3efdbff4d6cc26f8396728d3a90616527dfcbe3fb64f0 not found: ID does not exist" Dec 01 09:46:14 crc kubenswrapper[4689]: I1201 09:46:14.509780 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-trlc6/must-gather-j9kx7"] Dec 01 09:46:14 crc kubenswrapper[4689]: I1201 09:46:14.876861 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/must-gather-j9kx7" event={"ID":"6e26c226-e328-4af6-aeff-3579e26ad21e","Type":"ContainerStarted","Data":"b2092a7bd386167b10d85f5d2f4c7576e0b5e1dbf3c413f96946b6ca45ff9993"} Dec 01 09:46:15 crc kubenswrapper[4689]: I1201 09:46:15.058338 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4324480-e485-4833-bc8c-0073ec58fdd0" path="/var/lib/kubelet/pods/a4324480-e485-4833-bc8c-0073ec58fdd0/volumes" Dec 01 09:46:15 crc kubenswrapper[4689]: I1201 09:46:15.886862 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/must-gather-j9kx7" event={"ID":"6e26c226-e328-4af6-aeff-3579e26ad21e","Type":"ContainerStarted","Data":"86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf"} Dec 01 09:46:16 crc kubenswrapper[4689]: I1201 09:46:16.908620 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/must-gather-j9kx7" event={"ID":"6e26c226-e328-4af6-aeff-3579e26ad21e","Type":"ContainerStarted","Data":"f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9"} Dec 01 09:46:16 crc kubenswrapper[4689]: I1201 09:46:16.943195 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-trlc6/must-gather-j9kx7" podStartSLOduration=3.943168521 podStartE2EDuration="3.943168521s" podCreationTimestamp="2025-12-01 09:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:46:16.927046019 +0000 UTC m=+4056.999333933" watchObservedRunningTime="2025-12-01 09:46:16.943168521 +0000 UTC m=+4057.015456425" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.141031 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-trlc6/crc-debug-bpdvs"] Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.143481 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.144965 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-trlc6"/"default-dockercfg-9tppb" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.317047 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23c3481f-069d-4b47-9958-4fd08e3d1e0b-host\") pod \"crc-debug-bpdvs\" (UID: \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\") " pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.317172 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54rqh\" (UniqueName: \"kubernetes.io/projected/23c3481f-069d-4b47-9958-4fd08e3d1e0b-kube-api-access-54rqh\") pod \"crc-debug-bpdvs\" (UID: \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\") " pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.433054 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23c3481f-069d-4b47-9958-4fd08e3d1e0b-host\") pod \"crc-debug-bpdvs\" (UID: \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\") " pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.433176 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54rqh\" (UniqueName: \"kubernetes.io/projected/23c3481f-069d-4b47-9958-4fd08e3d1e0b-kube-api-access-54rqh\") pod \"crc-debug-bpdvs\" (UID: \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\") " pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.433208 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23c3481f-069d-4b47-9958-4fd08e3d1e0b-host\") pod \"crc-debug-bpdvs\" (UID: \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\") " pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.467250 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54rqh\" (UniqueName: \"kubernetes.io/projected/23c3481f-069d-4b47-9958-4fd08e3d1e0b-kube-api-access-54rqh\") pod \"crc-debug-bpdvs\" (UID: \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\") " pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.761715 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:46:20 crc kubenswrapper[4689]: I1201 09:46:20.959008 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/crc-debug-bpdvs" event={"ID":"23c3481f-069d-4b47-9958-4fd08e3d1e0b","Type":"ContainerStarted","Data":"5452ef65ae310dfea79c3b51815abe81b84b595d30b42dbd042803810582c394"} Dec 01 09:46:21 crc kubenswrapper[4689]: I1201 09:46:21.987097 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/crc-debug-bpdvs" event={"ID":"23c3481f-069d-4b47-9958-4fd08e3d1e0b","Type":"ContainerStarted","Data":"e675fb07467a93a815f141b80064e6fb8a69e29e583381d25487281d6523988b"} Dec 01 09:46:22 crc kubenswrapper[4689]: I1201 09:46:22.018032 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-trlc6/crc-debug-bpdvs" podStartSLOduration=2.018010851 podStartE2EDuration="2.018010851s" podCreationTimestamp="2025-12-01 09:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 09:46:22.004842681 +0000 UTC m=+4062.077130585" watchObservedRunningTime="2025-12-01 09:46:22.018010851 +0000 UTC m=+4062.090298745" Dec 01 09:46:39 crc kubenswrapper[4689]: I1201 09:46:39.147173 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:46:39 crc kubenswrapper[4689]: I1201 09:46:39.147781 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:46:39 crc kubenswrapper[4689]: I1201 09:46:39.147839 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 09:46:39 crc kubenswrapper[4689]: I1201 09:46:39.148711 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02184f9ae082d659d01c6605cb27fe963a839349b25bcf51719a6228ec800093"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:46:39 crc kubenswrapper[4689]: I1201 09:46:39.148786 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://02184f9ae082d659d01c6605cb27fe963a839349b25bcf51719a6228ec800093" gracePeriod=600 Dec 01 09:46:40 crc kubenswrapper[4689]: I1201 09:46:40.203995 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="02184f9ae082d659d01c6605cb27fe963a839349b25bcf51719a6228ec800093" exitCode=0 Dec 01 09:46:40 crc kubenswrapper[4689]: I1201 09:46:40.204048 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" 
event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"02184f9ae082d659d01c6605cb27fe963a839349b25bcf51719a6228ec800093"} Dec 01 09:46:40 crc kubenswrapper[4689]: I1201 09:46:40.204078 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f"} Dec 01 09:46:40 crc kubenswrapper[4689]: I1201 09:46:40.204096 4689 scope.go:117] "RemoveContainer" containerID="ce8ba0c4ebf8c19567df23bc269c8977e9b7f7f3972dba477579893ac7472ebf" Dec 01 09:47:01 crc kubenswrapper[4689]: I1201 09:47:01.410511 4689 generic.go:334] "Generic (PLEG): container finished" podID="23c3481f-069d-4b47-9958-4fd08e3d1e0b" containerID="e675fb07467a93a815f141b80064e6fb8a69e29e583381d25487281d6523988b" exitCode=0 Dec 01 09:47:01 crc kubenswrapper[4689]: I1201 09:47:01.410605 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/crc-debug-bpdvs" event={"ID":"23c3481f-069d-4b47-9958-4fd08e3d1e0b","Type":"ContainerDied","Data":"e675fb07467a93a815f141b80064e6fb8a69e29e583381d25487281d6523988b"} Dec 01 09:47:02 crc kubenswrapper[4689]: I1201 09:47:02.534525 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:47:02 crc kubenswrapper[4689]: I1201 09:47:02.544824 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23c3481f-069d-4b47-9958-4fd08e3d1e0b-host\") pod \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\" (UID: \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\") " Dec 01 09:47:02 crc kubenswrapper[4689]: I1201 09:47:02.544967 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23c3481f-069d-4b47-9958-4fd08e3d1e0b-host" (OuterVolumeSpecName: "host") pod "23c3481f-069d-4b47-9958-4fd08e3d1e0b" (UID: "23c3481f-069d-4b47-9958-4fd08e3d1e0b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:47:02 crc kubenswrapper[4689]: I1201 09:47:02.545200 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54rqh\" (UniqueName: \"kubernetes.io/projected/23c3481f-069d-4b47-9958-4fd08e3d1e0b-kube-api-access-54rqh\") pod \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\" (UID: \"23c3481f-069d-4b47-9958-4fd08e3d1e0b\") " Dec 01 09:47:02 crc kubenswrapper[4689]: I1201 09:47:02.545642 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23c3481f-069d-4b47-9958-4fd08e3d1e0b-host\") on node \"crc\" DevicePath \"\"" Dec 01 09:47:02 crc kubenswrapper[4689]: I1201 09:47:02.574744 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-trlc6/crc-debug-bpdvs"] Dec 01 09:47:02 crc kubenswrapper[4689]: I1201 09:47:02.582987 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-trlc6/crc-debug-bpdvs"] Dec 01 09:47:02 crc kubenswrapper[4689]: I1201 09:47:02.922679 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c3481f-069d-4b47-9958-4fd08e3d1e0b-kube-api-access-54rqh" (OuterVolumeSpecName: "kube-api-access-54rqh") pod "23c3481f-069d-4b47-9958-4fd08e3d1e0b" (UID: "23c3481f-069d-4b47-9958-4fd08e3d1e0b"). InnerVolumeSpecName "kube-api-access-54rqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:47:02 crc kubenswrapper[4689]: I1201 09:47:02.953565 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54rqh\" (UniqueName: \"kubernetes.io/projected/23c3481f-069d-4b47-9958-4fd08e3d1e0b-kube-api-access-54rqh\") on node \"crc\" DevicePath \"\"" Dec 01 09:47:03 crc kubenswrapper[4689]: I1201 09:47:03.078770 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c3481f-069d-4b47-9958-4fd08e3d1e0b" path="/var/lib/kubelet/pods/23c3481f-069d-4b47-9958-4fd08e3d1e0b/volumes" Dec 01 09:47:03 crc kubenswrapper[4689]: I1201 09:47:03.429763 4689 scope.go:117] "RemoveContainer" containerID="e675fb07467a93a815f141b80064e6fb8a69e29e583381d25487281d6523988b" Dec 01 09:47:03 crc kubenswrapper[4689]: I1201 09:47:03.429840 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-bpdvs" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.143746 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-trlc6/crc-debug-4qpt6"] Dec 01 09:47:04 crc kubenswrapper[4689]: E1201 09:47:04.144228 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c3481f-069d-4b47-9958-4fd08e3d1e0b" containerName="container-00" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.144244 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c3481f-069d-4b47-9958-4fd08e3d1e0b" containerName="container-00" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.144474 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c3481f-069d-4b47-9958-4fd08e3d1e0b" containerName="container-00" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.145075 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.147478 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-trlc6"/"default-dockercfg-9tppb" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.276188 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73a496e1-8314-4a10-bda0-5f5b31b319db-host\") pod \"crc-debug-4qpt6\" (UID: \"73a496e1-8314-4a10-bda0-5f5b31b319db\") " pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.276610 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwh5j\" (UniqueName: \"kubernetes.io/projected/73a496e1-8314-4a10-bda0-5f5b31b319db-kube-api-access-cwh5j\") pod \"crc-debug-4qpt6\" (UID: \"73a496e1-8314-4a10-bda0-5f5b31b319db\") " pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.378756 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73a496e1-8314-4a10-bda0-5f5b31b319db-host\") pod \"crc-debug-4qpt6\" (UID: \"73a496e1-8314-4a10-bda0-5f5b31b319db\") " pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.378903 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73a496e1-8314-4a10-bda0-5f5b31b319db-host\") pod \"crc-debug-4qpt6\" (UID: \"73a496e1-8314-4a10-bda0-5f5b31b319db\") " pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.378917 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwh5j\" (UniqueName: \"kubernetes.io/projected/73a496e1-8314-4a10-bda0-5f5b31b319db-kube-api-access-cwh5j\") pod \"crc-debug-4qpt6\" (UID: \"73a496e1-8314-4a10-bda0-5f5b31b319db\") " pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:04 crc kubenswrapper[4689]: I1201 09:47:04.822390 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwh5j\" (UniqueName: \"kubernetes.io/projected/73a496e1-8314-4a10-bda0-5f5b31b319db-kube-api-access-cwh5j\") pod \"crc-debug-4qpt6\" (UID: \"73a496e1-8314-4a10-bda0-5f5b31b319db\") " pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:05 crc kubenswrapper[4689]: I1201 09:47:05.065572 4689 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:05 crc kubenswrapper[4689]: I1201 09:47:05.454754 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/crc-debug-4qpt6" event={"ID":"73a496e1-8314-4a10-bda0-5f5b31b319db","Type":"ContainerStarted","Data":"b6bac490a10e9c30902b818df2b1d86f46429429abec37d496e25d474c91baf5"} Dec 01 09:47:05 crc kubenswrapper[4689]: I1201 09:47:05.454820 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/crc-debug-4qpt6" event={"ID":"73a496e1-8314-4a10-bda0-5f5b31b319db","Type":"ContainerStarted","Data":"8bed1526219c62a3ee94bb7001075c7d97a160bae6aa0c260db4fe15cb2c0147"} Dec 01 09:47:06 crc kubenswrapper[4689]: I1201 09:47:06.469022 4689 generic.go:334] "Generic (PLEG): container finished" podID="73a496e1-8314-4a10-bda0-5f5b31b319db" containerID="b6bac490a10e9c30902b818df2b1d86f46429429abec37d496e25d474c91baf5" exitCode=0 Dec 01 09:47:06 crc kubenswrapper[4689]: I1201 09:47:06.469099 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/crc-debug-4qpt6" event={"ID":"73a496e1-8314-4a10-bda0-5f5b31b319db","Type":"ContainerDied","Data":"b6bac490a10e9c30902b818df2b1d86f46429429abec37d496e25d474c91baf5"} Dec 01 09:47:07 crc kubenswrapper[4689]: I1201 09:47:07.582318 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:07 crc kubenswrapper[4689]: I1201 09:47:07.613800 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-trlc6/crc-debug-4qpt6"] Dec 01 09:47:07 crc kubenswrapper[4689]: I1201 09:47:07.621982 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-trlc6/crc-debug-4qpt6"] Dec 01 09:47:07 crc kubenswrapper[4689]: I1201 09:47:07.749504 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73a496e1-8314-4a10-bda0-5f5b31b319db-host\") pod \"73a496e1-8314-4a10-bda0-5f5b31b319db\" (UID: \"73a496e1-8314-4a10-bda0-5f5b31b319db\") " Dec 01 09:47:07 crc kubenswrapper[4689]: I1201 09:47:07.749610 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwh5j\" (UniqueName: \"kubernetes.io/projected/73a496e1-8314-4a10-bda0-5f5b31b319db-kube-api-access-cwh5j\") pod \"73a496e1-8314-4a10-bda0-5f5b31b319db\" (UID: \"73a496e1-8314-4a10-bda0-5f5b31b319db\") " Dec 01 09:47:07 crc kubenswrapper[4689]: I1201 09:47:07.749939 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73a496e1-8314-4a10-bda0-5f5b31b319db-host" (OuterVolumeSpecName: "host") pod "73a496e1-8314-4a10-bda0-5f5b31b319db" (UID: "73a496e1-8314-4a10-bda0-5f5b31b319db"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:47:07 crc kubenswrapper[4689]: I1201 09:47:07.750536 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73a496e1-8314-4a10-bda0-5f5b31b319db-host\") on node \"crc\" DevicePath \"\"" Dec 01 09:47:07 crc kubenswrapper[4689]: I1201 09:47:07.759395 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a496e1-8314-4a10-bda0-5f5b31b319db-kube-api-access-cwh5j" (OuterVolumeSpecName: "kube-api-access-cwh5j") pod "73a496e1-8314-4a10-bda0-5f5b31b319db" (UID: "73a496e1-8314-4a10-bda0-5f5b31b319db"). 
InnerVolumeSpecName "kube-api-access-cwh5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:47:07 crc kubenswrapper[4689]: I1201 09:47:07.852072 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwh5j\" (UniqueName: \"kubernetes.io/projected/73a496e1-8314-4a10-bda0-5f5b31b319db-kube-api-access-cwh5j\") on node \"crc\" DevicePath \"\"" Dec 01 09:47:08 crc kubenswrapper[4689]: I1201 09:47:08.488590 4689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bed1526219c62a3ee94bb7001075c7d97a160bae6aa0c260db4fe15cb2c0147" Dec 01 09:47:08 crc kubenswrapper[4689]: I1201 09:47:08.488663 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-4qpt6" Dec 01 09:47:08 crc kubenswrapper[4689]: I1201 09:47:08.930145 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-trlc6/crc-debug-jpxwg"] Dec 01 09:47:08 crc kubenswrapper[4689]: E1201 09:47:08.930652 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a496e1-8314-4a10-bda0-5f5b31b319db" containerName="container-00" Dec 01 09:47:08 crc kubenswrapper[4689]: I1201 09:47:08.930667 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a496e1-8314-4a10-bda0-5f5b31b319db" containerName="container-00" Dec 01 09:47:08 crc kubenswrapper[4689]: I1201 09:47:08.930978 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a496e1-8314-4a10-bda0-5f5b31b319db" containerName="container-00" Dec 01 09:47:08 crc kubenswrapper[4689]: I1201 09:47:08.931756 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:08 crc kubenswrapper[4689]: I1201 09:47:08.936260 4689 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-trlc6"/"default-dockercfg-9tppb" Dec 01 09:47:09 crc kubenswrapper[4689]: I1201 09:47:09.058156 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a496e1-8314-4a10-bda0-5f5b31b319db" path="/var/lib/kubelet/pods/73a496e1-8314-4a10-bda0-5f5b31b319db/volumes" Dec 01 09:47:09 crc kubenswrapper[4689]: I1201 09:47:09.075851 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktd9\" (UniqueName: \"kubernetes.io/projected/0608b7a8-eb02-4fcd-9a47-6f87e3345339-kube-api-access-hktd9\") pod \"crc-debug-jpxwg\" (UID: \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\") " pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:09 crc kubenswrapper[4689]: I1201 09:47:09.076001 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0608b7a8-eb02-4fcd-9a47-6f87e3345339-host\") pod \"crc-debug-jpxwg\" (UID: \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\") " pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:09 crc kubenswrapper[4689]: I1201 09:47:09.177540 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktd9\" (UniqueName: \"kubernetes.io/projected/0608b7a8-eb02-4fcd-9a47-6f87e3345339-kube-api-access-hktd9\") pod \"crc-debug-jpxwg\" (UID: \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\") " pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:09 crc kubenswrapper[4689]: I1201 09:47:09.177663 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/0608b7a8-eb02-4fcd-9a47-6f87e3345339-host\") pod \"crc-debug-jpxwg\" (UID: \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\") " pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:09 crc kubenswrapper[4689]: I1201 09:47:09.179734 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0608b7a8-eb02-4fcd-9a47-6f87e3345339-host\") pod \"crc-debug-jpxwg\" (UID: \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\") " pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:09 crc kubenswrapper[4689]: I1201 09:47:09.202055 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktd9\" (UniqueName: \"kubernetes.io/projected/0608b7a8-eb02-4fcd-9a47-6f87e3345339-kube-api-access-hktd9\") pod \"crc-debug-jpxwg\" (UID: \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\") " pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:09 crc kubenswrapper[4689]: I1201 09:47:09.252072 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:10 crc kubenswrapper[4689]: W1201 09:47:10.259358 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0608b7a8_eb02_4fcd_9a47_6f87e3345339.slice/crio-b024a615747a874a9edd76874b04d5c3344972f88afec83bc1e2c55d792e6549 WatchSource:0}: Error finding container b024a615747a874a9edd76874b04d5c3344972f88afec83bc1e2c55d792e6549: Status 404 returned error can't find the container with id b024a615747a874a9edd76874b04d5c3344972f88afec83bc1e2c55d792e6549 Dec 01 09:47:10 crc kubenswrapper[4689]: I1201 09:47:10.285890 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/crc-debug-jpxwg" event={"ID":"0608b7a8-eb02-4fcd-9a47-6f87e3345339","Type":"ContainerStarted","Data":"b024a615747a874a9edd76874b04d5c3344972f88afec83bc1e2c55d792e6549"} Dec 01 09:47:11 crc kubenswrapper[4689]: I1201 09:47:11.297119 4689 generic.go:334] "Generic (PLEG): container finished" podID="0608b7a8-eb02-4fcd-9a47-6f87e3345339" containerID="54410d49c518553b6eb059d7a11e49784655be42cecf02c2a8396e7f8f7aa21c" exitCode=0 Dec 01 09:47:11 crc kubenswrapper[4689]: I1201 09:47:11.297326 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/crc-debug-jpxwg" event={"ID":"0608b7a8-eb02-4fcd-9a47-6f87e3345339","Type":"ContainerDied","Data":"54410d49c518553b6eb059d7a11e49784655be42cecf02c2a8396e7f8f7aa21c"} Dec 01 09:47:11 crc kubenswrapper[4689]: I1201 09:47:11.340600 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-trlc6/crc-debug-jpxwg"] Dec 01 09:47:11 crc kubenswrapper[4689]: I1201 09:47:11.355142 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-trlc6/crc-debug-jpxwg"] Dec 01 09:47:12 crc kubenswrapper[4689]: I1201 09:47:12.463188 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:12 crc kubenswrapper[4689]: I1201 09:47:12.517796 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0608b7a8-eb02-4fcd-9a47-6f87e3345339-host\") pod \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\" (UID: \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\") " Dec 01 09:47:12 crc kubenswrapper[4689]: I1201 09:47:12.517874 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0608b7a8-eb02-4fcd-9a47-6f87e3345339-host" (OuterVolumeSpecName: "host") pod "0608b7a8-eb02-4fcd-9a47-6f87e3345339" (UID: "0608b7a8-eb02-4fcd-9a47-6f87e3345339"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 09:47:12 crc kubenswrapper[4689]: I1201 09:47:12.518354 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hktd9\" (UniqueName: \"kubernetes.io/projected/0608b7a8-eb02-4fcd-9a47-6f87e3345339-kube-api-access-hktd9\") pod \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\" (UID: \"0608b7a8-eb02-4fcd-9a47-6f87e3345339\") " Dec 01 09:47:12 crc kubenswrapper[4689]: I1201 09:47:12.519142 4689 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0608b7a8-eb02-4fcd-9a47-6f87e3345339-host\") on node \"crc\" DevicePath \"\"" Dec 01 09:47:12 crc kubenswrapper[4689]: I1201 09:47:12.528254 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0608b7a8-eb02-4fcd-9a47-6f87e3345339-kube-api-access-hktd9" (OuterVolumeSpecName: "kube-api-access-hktd9") pod "0608b7a8-eb02-4fcd-9a47-6f87e3345339" (UID: "0608b7a8-eb02-4fcd-9a47-6f87e3345339"). InnerVolumeSpecName "kube-api-access-hktd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:47:12 crc kubenswrapper[4689]: I1201 09:47:12.620891 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hktd9\" (UniqueName: \"kubernetes.io/projected/0608b7a8-eb02-4fcd-9a47-6f87e3345339-kube-api-access-hktd9\") on node \"crc\" DevicePath \"\"" Dec 01 09:47:13 crc kubenswrapper[4689]: I1201 09:47:13.059009 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0608b7a8-eb02-4fcd-9a47-6f87e3345339" path="/var/lib/kubelet/pods/0608b7a8-eb02-4fcd-9a47-6f87e3345339/volumes" Dec 01 09:47:13 crc kubenswrapper[4689]: I1201 09:47:13.320084 4689 scope.go:117] "RemoveContainer" containerID="54410d49c518553b6eb059d7a11e49784655be42cecf02c2a8396e7f8f7aa21c" Dec 01 09:47:13 crc kubenswrapper[4689]: I1201 09:47:13.320134 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-trlc6/crc-debug-jpxwg" Dec 01 09:47:41 crc kubenswrapper[4689]: I1201 09:47:41.039929 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7bd884c498-fvqdz_1bd94e50-aa23-4249-acd5-293b272a8123/barbican-api-log/0.log" Dec 01 09:47:41 crc kubenswrapper[4689]: I1201 09:47:41.164755 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7bd884c498-fvqdz_1bd94e50-aa23-4249-acd5-293b272a8123/barbican-api/0.log" Dec 01 09:47:41 crc kubenswrapper[4689]: I1201 09:47:41.284263 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cdd6b5dcb-j5dgx_215d6908-3cbd-486b-adc3-82cdaddef118/barbican-keystone-listener/0.log" Dec 01 09:47:41 crc kubenswrapper[4689]: I1201 09:47:41.416815 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-844c6c5cff-mqnnk_059abe7a-8a94-4c9a-8ac2-1830fffad22c/barbican-worker/0.log" Dec 01 09:47:41 crc kubenswrapper[4689]: I1201 09:47:41.479209 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cdd6b5dcb-j5dgx_215d6908-3cbd-486b-adc3-82cdaddef118/barbican-keystone-listener-log/0.log" Dec 01 09:47:41 crc kubenswrapper[4689]: I1201 09:47:41.636033 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-844c6c5cff-mqnnk_059abe7a-8a94-4c9a-8ac2-1830fffad22c/barbican-worker-log/0.log" Dec 01 09:47:41 crc kubenswrapper[4689]: I1201 09:47:41.813748 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-qsxf4_caf4bdec-471c-4c07-a5a7-294faf35c880/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:41 crc kubenswrapper[4689]: I1201 09:47:41.863838 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5971de46-c278-4f0d-80be-0a7a25d7678c/ceilometer-central-agent/0.log" Dec 01 09:47:41 crc kubenswrapper[4689]: I1201 09:47:41.955663 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5971de46-c278-4f0d-80be-0a7a25d7678c/ceilometer-notification-agent/0.log" Dec 01 09:47:42 crc kubenswrapper[4689]: I1201 09:47:42.076349 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5971de46-c278-4f0d-80be-0a7a25d7678c/proxy-httpd/0.log" Dec 01 09:47:42 crc kubenswrapper[4689]: I1201 09:47:42.165152 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5971de46-c278-4f0d-80be-0a7a25d7678c/sg-core/0.log" Dec 01 09:47:42 crc kubenswrapper[4689]: I1201 09:47:42.246793 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8f0f718c-3a19-482a-9ed0-4c4d7dbac886/cinder-api/0.log" Dec 01 09:47:42 crc kubenswrapper[4689]: I1201 09:47:42.354044 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8f0f718c-3a19-482a-9ed0-4c4d7dbac886/cinder-api-log/0.log" Dec 01 09:47:42 crc kubenswrapper[4689]: I1201 09:47:42.484720 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0556c1c8-69cc-4fa6-a3df-46a4ed439312/cinder-scheduler/1.log" Dec 01 09:47:42 crc kubenswrapper[4689]: I1201 09:47:42.596124 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0556c1c8-69cc-4fa6-a3df-46a4ed439312/cinder-scheduler/0.log" Dec 01 09:47:42 crc kubenswrapper[4689]: I1201 09:47:42.643617 4689 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0556c1c8-69cc-4fa6-a3df-46a4ed439312/probe/0.log" Dec 01 09:47:42 crc kubenswrapper[4689]: I1201 09:47:42.771774 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vf78w_14713a8f-36bf-48fa-bfb2-3c384ad7abd0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:42 crc kubenswrapper[4689]: I1201 09:47:42.926859 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fndhp_4ae39c64-0beb-4b8c-a08b-35aba6ecb704/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:43 crc kubenswrapper[4689]: I1201 09:47:43.045940 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-dkgsn_fb61d912-665c-4e59-b0cf-7e46e24e5201/init/0.log" Dec 01 09:47:43 crc kubenswrapper[4689]: I1201 09:47:43.275467 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-dkgsn_fb61d912-665c-4e59-b0cf-7e46e24e5201/init/0.log" Dec 01 09:47:43 crc kubenswrapper[4689]: I1201 09:47:43.334979 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rdqqh_7f3287e5-9e76-46ee-91c4-8bc9b69a738f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:43 crc kubenswrapper[4689]: I1201 09:47:43.337127 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcf8b9d95-dkgsn_fb61d912-665c-4e59-b0cf-7e46e24e5201/dnsmasq-dns/0.log" Dec 01 09:47:43 crc kubenswrapper[4689]: I1201 09:47:43.614208 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_097455e0-57c7-4c8e-bd14-86890aecc860/glance-httpd/0.log" Dec 01 09:47:43 crc kubenswrapper[4689]: I1201 09:47:43.623306 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_097455e0-57c7-4c8e-bd14-86890aecc860/glance-log/0.log" Dec 01 09:47:43 crc kubenswrapper[4689]: I1201 09:47:43.787256 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5dd0e22c-0f46-4089-9ef0-7882c6068697/glance-httpd/0.log" Dec 01 09:47:43 crc kubenswrapper[4689]: I1201 09:47:43.866999 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5dd0e22c-0f46-4089-9ef0-7882c6068697/glance-log/0.log" Dec 01 09:47:44 crc kubenswrapper[4689]: I1201 09:47:44.239341 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d65b9788-2kr5p_fcebf70c-3de0-499e-928d-3419299a512f/horizon/1.log" Dec 01 09:47:44 crc kubenswrapper[4689]: I1201 09:47:44.242421 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d65b9788-2kr5p_fcebf70c-3de0-499e-928d-3419299a512f/horizon/0.log" Dec 01 09:47:44 crc kubenswrapper[4689]: I1201 09:47:44.610775 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-cbwf4_c8be40c2-4b56-46d0-b99b-0fd198004a03/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:44 crc kubenswrapper[4689]: I1201 09:47:44.701817 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-x69xg_7b1625a4-a976-4cd2-8e93-7022d1571f1f/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 
09:47:44 crc kubenswrapper[4689]: I1201 09:47:44.736643 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d65b9788-2kr5p_fcebf70c-3de0-499e-928d-3419299a512f/horizon-log/0.log" Dec 01 09:47:45 crc kubenswrapper[4689]: I1201 09:47:45.128541 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29409661-b4dz4_06af101b-855c-409b-8f88-171d7e9aaffc/keystone-cron/0.log" Dec 01 09:47:45 crc kubenswrapper[4689]: I1201 09:47:45.330669 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_432574e7-df30-4103-a396-c758c4df932c/kube-state-metrics/1.log" Dec 01 09:47:45 crc kubenswrapper[4689]: I1201 09:47:45.429309 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_432574e7-df30-4103-a396-c758c4df932c/kube-state-metrics/0.log" Dec 01 09:47:45 crc kubenswrapper[4689]: I1201 09:47:45.486837 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7575f55b68-75xn5_3c402617-8f98-4531-b798-f395844db3ea/keystone-api/0.log" Dec 01 09:47:45 crc kubenswrapper[4689]: I1201 09:47:45.645839 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-m5ms8_79ac411d-051b-464c-ab78-a5e99ef18520/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:45 crc kubenswrapper[4689]: I1201 09:47:45.960148 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58c7f9c74f-nqnzt_9834ce74-a0c7-4e32-9d8b-1d39b27c62b6/neutron-api/0.log" Dec 01 09:47:46 crc kubenswrapper[4689]: I1201 09:47:46.004482 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58c7f9c74f-nqnzt_9834ce74-a0c7-4e32-9d8b-1d39b27c62b6/neutron-httpd/0.log" Dec 01 09:47:46 crc kubenswrapper[4689]: I1201 09:47:46.084333 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-tmxfz_ccee02a7-c83e-4eb5-a6e7-f2ad619d948a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:46 crc kubenswrapper[4689]: I1201 09:47:46.584070 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a3a578c7-bcdf-46f5-a781-5759e3c6da45/nova-api-api/0.log" Dec 01 09:47:46 crc kubenswrapper[4689]: I1201 09:47:46.819541 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a3a578c7-bcdf-46f5-a781-5759e3c6da45/nova-api-log/0.log" Dec 01 09:47:47 crc kubenswrapper[4689]: I1201 09:47:47.093843 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a3a578c7-bcdf-46f5-a781-5759e3c6da45/nova-api-log/1.log" Dec 01 09:47:47 crc kubenswrapper[4689]: I1201 09:47:47.255088 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a3a578c7-bcdf-46f5-a781-5759e3c6da45/nova-api-api/1.log" Dec 01 09:47:47 crc kubenswrapper[4689]: I1201 09:47:47.284747 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d111a251-6be1-4996-a20d-a6ecdb0dbec9/nova-cell0-conductor-conductor/0.log" Dec 01 09:47:47 crc kubenswrapper[4689]: I1201 09:47:47.558148 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4a7f30c7-ee71-44e3-9aed-b1e65916e8b7/nova-cell1-conductor-conductor/0.log" Dec 01 09:47:47 crc kubenswrapper[4689]: I1201 09:47:47.682794 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d1e959a4-6ab1-4c6c-86e4-8e319fc8806a/nova-cell1-novncproxy-novncproxy/0.log" Dec 01 09:47:47 crc kubenswrapper[4689]: I1201 09:47:47.898356 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-mkgfg_5351042e-776c-44c1-a6ad-bf530a24bfb7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:48 crc kubenswrapper[4689]: I1201 09:47:48.031586 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0e9419c-e23b-4c71-b88e-736138bcdd65/nova-metadata-log/0.log" Dec 01 09:47:48 crc kubenswrapper[4689]: I1201 09:47:48.143636 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0e9419c-e23b-4c71-b88e-736138bcdd65/nova-metadata-log/1.log" Dec 01 09:47:48 crc kubenswrapper[4689]: I1201 09:47:48.223601 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0e9419c-e23b-4c71-b88e-736138bcdd65/nova-metadata-metadata/0.log" Dec 01 09:47:48 crc kubenswrapper[4689]: I1201 09:47:48.663455 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a13f2879-b8c7-42d5-8f88-ca9aeb7f26bc/nova-scheduler-scheduler/0.log" Dec 01 09:47:48 crc kubenswrapper[4689]: I1201 09:47:48.677638 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bc1ecd4c-eede-492c-ac97-071c42545607/mysql-bootstrap/0.log" Dec 01 09:47:49 crc kubenswrapper[4689]: I1201 09:47:49.478243 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bc1ecd4c-eede-492c-ac97-071c42545607/mysql-bootstrap/0.log" Dec 01 09:47:49 crc kubenswrapper[4689]: I1201 09:47:49.533590 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_bc1ecd4c-eede-492c-ac97-071c42545607/galera/0.log" Dec 01 09:47:49 crc kubenswrapper[4689]: I1201 09:47:49.751779 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0e9419c-e23b-4c71-b88e-736138bcdd65/nova-metadata-metadata/1.log" Dec 01 09:47:49 crc kubenswrapper[4689]: I1201 09:47:49.783796 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_555543d8-21bb-4dba-9c08-ab82e90ea894/mysql-bootstrap/0.log" Dec 01 09:47:50 crc kubenswrapper[4689]: I1201 09:47:50.040581 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_555543d8-21bb-4dba-9c08-ab82e90ea894/mysql-bootstrap/0.log" Dec 01 09:47:50 crc kubenswrapper[4689]: I1201 09:47:50.147907 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_555543d8-21bb-4dba-9c08-ab82e90ea894/galera/0.log" Dec 01 09:47:50 crc kubenswrapper[4689]: I1201 09:47:50.186847 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_555543d8-21bb-4dba-9c08-ab82e90ea894/galera/1.log" Dec 01 09:47:50 crc kubenswrapper[4689]: I1201 09:47:50.367530 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0cbf9f73-fecd-4c17-95c6-b0bd5a1ae285/openstackclient/0.log" Dec 01 09:47:50 crc kubenswrapper[4689]: I1201 09:47:50.556003 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-48955_8731b0fb-0429-4730-8da9-cc182fdf29e1/ovn-controller/0.log" Dec 01 09:47:50 crc kubenswrapper[4689]: I1201 09:47:50.679705 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-t4rfs_5b0566d9-e730-4929-aa69-fba41a7c88c0/openstack-network-exporter/0.log" Dec 01 09:47:51 crc kubenswrapper[4689]: I1201 09:47:51.143941 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sj4xx_a0d0f0ef-1203-4001-9872-7c32022a4839/ovsdb-server-init/0.log" Dec 01 09:47:51 crc kubenswrapper[4689]: I1201 09:47:51.413525 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sj4xx_a0d0f0ef-1203-4001-9872-7c32022a4839/ovs-vswitchd/0.log" Dec 01 09:47:51 crc kubenswrapper[4689]: I1201 09:47:51.419779 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sj4xx_a0d0f0ef-1203-4001-9872-7c32022a4839/ovsdb-server-init/0.log" Dec 01 09:47:51 crc kubenswrapper[4689]: I1201 09:47:51.455650 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sj4xx_a0d0f0ef-1203-4001-9872-7c32022a4839/ovsdb-server/0.log" Dec 01 09:47:51 crc kubenswrapper[4689]: I1201 09:47:51.775026 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9d8rc_8df0b21e-33ac-48fa-b46f-558a7e4c37fc/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:51 crc kubenswrapper[4689]: I1201 09:47:51.811752 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_aa6871a1-f6d5-44b1-a4b7-638763c9c92b/openstack-network-exporter/0.log" Dec 01 09:47:51 crc kubenswrapper[4689]: I1201 09:47:51.923824 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_aa6871a1-f6d5-44b1-a4b7-638763c9c92b/ovn-northd/0.log" Dec 01 09:47:52 crc kubenswrapper[4689]: I1201 09:47:52.102471 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_150dfc79-4971-4c3d-aada-13fc85bd101c/ovsdbserver-nb/0.log" Dec 01 09:47:52 crc kubenswrapper[4689]: I1201 09:47:52.169265 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_150dfc79-4971-4c3d-aada-13fc85bd101c/openstack-network-exporter/0.log" Dec 01 09:47:52 crc kubenswrapper[4689]: I1201 09:47:52.437710 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5b1a856a-afb7-4839-a797-7625521520b2/openstack-network-exporter/0.log" Dec 01 09:47:52 crc kubenswrapper[4689]: I1201 09:47:52.501177 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_5b1a856a-afb7-4839-a797-7625521520b2/ovsdbserver-sb/0.log" Dec 01 09:47:52 crc kubenswrapper[4689]: I1201 09:47:52.707324 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5454b5d64d-5p8d8_a31e6c25-e2a2-4c12-9138-6969155a7f20/placement-api/0.log" Dec 01 09:47:52 crc kubenswrapper[4689]: I1201 09:47:52.768381 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5454b5d64d-5p8d8_a31e6c25-e2a2-4c12-9138-6969155a7f20/placement-log/0.log" Dec 01 09:47:52 crc kubenswrapper[4689]: I1201 09:47:52.954477 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5100fd48-e762-41b7-ac48-29b85c21dd3d/setup-container/0.log" Dec 01 09:47:53 crc kubenswrapper[4689]: I1201 09:47:53.109097 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5100fd48-e762-41b7-ac48-29b85c21dd3d/setup-container/0.log" Dec 01 09:47:53 crc kubenswrapper[4689]: I1201 09:47:53.134417 4689 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_5100fd48-e762-41b7-ac48-29b85c21dd3d/rabbitmq/0.log" Dec 01 09:47:53 crc kubenswrapper[4689]: I1201 09:47:53.254630 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b5ea820-9372-4a98-8000-75815f156435/setup-container/0.log" Dec 01 09:47:53 crc kubenswrapper[4689]: I1201 09:47:53.475151 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b5ea820-9372-4a98-8000-75815f156435/setup-container/0.log" Dec 01 09:47:53 crc kubenswrapper[4689]: I1201 09:47:53.666522 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4b5ea820-9372-4a98-8000-75815f156435/rabbitmq/0.log" Dec 01 09:47:53 crc kubenswrapper[4689]: I1201 09:47:53.728718 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-mjpf5_defe39e2-091c-472e-aefe-7691672100e7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:53 crc kubenswrapper[4689]: I1201 09:47:53.924064 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gcfdn_1da07875-e46b-4de1-8eea-fb33b293b5a7/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:54 crc kubenswrapper[4689]: I1201 09:47:54.085924 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-drctr_0e13f608-85ec-4fe4-b6bb-e651d2f736d3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:54 crc kubenswrapper[4689]: I1201 09:47:54.181772 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-9vvhw_3bdb7314-795b-4a29-a1a6-ba5f3bccdcbe/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:54 crc kubenswrapper[4689]: I1201 09:47:54.354259 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-q6z88_8fd75600-1f4f-4bfb-94fd-d9778efd0e5e/ssh-known-hosts-edpm-deployment/0.log" Dec 01 09:47:54 crc kubenswrapper[4689]: I1201 09:47:54.640699 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7459744dff-cxqv7_e242b763-d0db-401f-b552-d109d6c5ec28/proxy-server/0.log" Dec 01 09:47:54 crc kubenswrapper[4689]: I1201 09:47:54.725666 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7459744dff-cxqv7_e242b763-d0db-401f-b552-d109d6c5ec28/proxy-httpd/0.log" Dec 01 09:47:54 crc kubenswrapper[4689]: I1201 09:47:54.808009 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-66t7q_0e57c646-4b20-4bb9-9c89-bad52b7a1c07/swift-ring-rebalance/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.013246 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/account-auditor/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.131775 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/account-reaper/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.215055 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/account-replicator/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.347713 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/account-server/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.362662 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/container-auditor/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.432094 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/container-replicator/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.515984 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/container-server/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.627909 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/container-updater/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.706023 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-auditor/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.779575 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-expirer/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.827384 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-replicator/0.log" Dec 01 09:47:55 crc kubenswrapper[4689]: I1201 09:47:55.900421 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-server/0.log" Dec 01 09:47:56 crc kubenswrapper[4689]: I1201 09:47:56.034163 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/object-updater/0.log" Dec 01 09:47:56 crc kubenswrapper[4689]: I1201 09:47:56.099315 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/rsync/0.log" Dec 01 09:47:56 crc kubenswrapper[4689]: I1201 09:47:56.126311 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c18c4a63-48ba-42e2-a7f0-d5750963b90f/swift-recon-cron/0.log" Dec 01 09:47:56 crc kubenswrapper[4689]: I1201 09:47:56.387333 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-g6njz_4a88b941-7390-4f78-83e5-733fe9d39482/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:47:56 crc kubenswrapper[4689]: I1201 09:47:56.460689 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_107c3226-2b1b-4f80-9670-8f0c1ffd3337/tempest-tests-tempest-tests-runner/0.log" Dec 01 09:47:56 crc kubenswrapper[4689]: I1201 09:47:56.627085 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_308bdd90-c162-47a3-bc04-5369c9b235b8/test-operator-logs-container/0.log" Dec 01 09:47:56 crc kubenswrapper[4689]: I1201 09:47:56.715895 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-42d6w_dc01b01d-6ad2-4595-ab0f-42cc127d1a7a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 01 09:48:07 crc 
kubenswrapper[4689]: I1201 09:48:07.956307 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f04989a7-e9bc-4d0b-a7a1-efe12657bd2b/memcached/0.log" Dec 01 09:48:29 crc kubenswrapper[4689]: I1201 09:48:29.589606 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7vlqn_7ce2f328-3ee3-4800-89e4-9141c841c258/kube-rbac-proxy/0.log" Dec 01 09:48:30 crc kubenswrapper[4689]: I1201 09:48:30.155088 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7vlqn_7ce2f328-3ee3-4800-89e4-9141c841c258/manager/1.log" Dec 01 09:48:30 crc kubenswrapper[4689]: I1201 09:48:30.224069 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7vlqn_7ce2f328-3ee3-4800-89e4-9141c841c258/manager/0.log" Dec 01 09:48:30 crc kubenswrapper[4689]: I1201 09:48:30.268889 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7vrt5_5266d333-3337-4481-9478-2e1df848bfa2/kube-rbac-proxy/0.log" Dec 01 09:48:30 crc kubenswrapper[4689]: I1201 09:48:30.372729 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7vrt5_5266d333-3337-4481-9478-2e1df848bfa2/manager/1.log" Dec 01 09:48:30 crc kubenswrapper[4689]: I1201 09:48:30.465920 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7vrt5_5266d333-3337-4481-9478-2e1df848bfa2/manager/0.log" Dec 01 09:48:30 crc kubenswrapper[4689]: I1201 09:48:30.541318 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-25q6j_2b35aff9-c66d-448c-9883-05e650f7f147/kube-rbac-proxy/0.log" Dec 01 09:48:30 crc kubenswrapper[4689]: I1201 09:48:30.661029 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-25q6j_2b35aff9-c66d-448c-9883-05e650f7f147/manager/0.log" Dec 01 09:48:30 crc kubenswrapper[4689]: I1201 09:48:30.822856 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/util/0.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.040428 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/util/0.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.043625 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/pull/0.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.113227 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/pull/0.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.307399 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/util/0.log" Dec 01 09:48:31 crc 
kubenswrapper[4689]: I1201 09:48:31.340653 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/extract/0.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.344783 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f2e15f12445683aa2b8703b846077ffc01e309ca5b05095633f7ac13f1j55cj_8405d928-22c4-4389-9b08-f6e3dc2acfdc/pull/0.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.531438 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-xhrp7_ae47d16a-5025-44f4-8fa4-f5aa08b126b8/manager/1.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.586564 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-xhrp7_ae47d16a-5025-44f4-8fa4-f5aa08b126b8/kube-rbac-proxy/0.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.626168 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-xhrp7_ae47d16a-5025-44f4-8fa4-f5aa08b126b8/manager/0.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.753879 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-w6qx2_fc02885a-340a-4800-bd0b-360c0476b456/kube-rbac-proxy/0.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.826960 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-w6qx2_fc02885a-340a-4800-bd0b-360c0476b456/manager/1.log" Dec 01 09:48:31 crc kubenswrapper[4689]: I1201 09:48:31.912929 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-w6qx2_fc02885a-340a-4800-bd0b-360c0476b456/manager/0.log" Dec 01 09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.018302 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-dp8gl_ffc5e400-7853-4b1d-ae11-d6ffa553093a/kube-rbac-proxy/0.log" Dec 01 09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.098757 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-dp8gl_ffc5e400-7853-4b1d-ae11-d6ffa553093a/manager/1.log" Dec 01 09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.213788 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-dp8gl_ffc5e400-7853-4b1d-ae11-d6ffa553093a/manager/0.log" Dec 01 09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.242344 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tgmx9_e44ef73a-e172-4557-920d-42f84488390e/kube-rbac-proxy/0.log" Dec 01 09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.372628 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tgmx9_e44ef73a-e172-4557-920d-42f84488390e/manager/1.log" Dec 01 09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.535073 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-tgmx9_e44ef73a-e172-4557-920d-42f84488390e/manager/0.log" Dec 01 
09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.572285 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-f7xtr_ea3e4b08-090d-444e-ba53-a3df490fbaf8/kube-rbac-proxy/0.log" Dec 01 09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.700795 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-f7xtr_ea3e4b08-090d-444e-ba53-a3df490fbaf8/manager/1.log" Dec 01 09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.712334 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-f7xtr_ea3e4b08-090d-444e-ba53-a3df490fbaf8/manager/0.log" Dec 01 09:48:32 crc kubenswrapper[4689]: I1201 09:48:32.781654 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-758d67db86-z298n_2974e300-3f26-4ec0-912a-9ee6b78f33ce/kube-rbac-proxy/0.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.039780 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-758d67db86-z298n_2974e300-3f26-4ec0-912a-9ee6b78f33ce/manager/0.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.097731 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-758d67db86-z298n_2974e300-3f26-4ec0-912a-9ee6b78f33ce/manager/1.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.228326 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-x722t_3751be2a-8675-4b07-8198-101bfdd71d72/kube-rbac-proxy/0.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.233787 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-x722t_3751be2a-8675-4b07-8198-101bfdd71d72/manager/1.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.325688 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-x722t_3751be2a-8675-4b07-8198-101bfdd71d72/manager/0.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.446447 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-fm9bv_0d311ded-de3a-42e8-87d3-23c50c4fbd8a/kube-rbac-proxy/0.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.461637 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-fm9bv_0d311ded-de3a-42e8-87d3-23c50c4fbd8a/manager/1.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.508005 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-fm9bv_0d311ded-de3a-42e8-87d3-23c50c4fbd8a/manager/0.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.652907 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ghq5b_4d923f8c-103b-4b12-b2e7-ea926440e5e7/kube-rbac-proxy/0.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.712124 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ghq5b_4d923f8c-103b-4b12-b2e7-ea926440e5e7/manager/1.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.753035 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-ghq5b_4d923f8c-103b-4b12-b2e7-ea926440e5e7/manager/0.log" Dec 01 09:48:33 crc kubenswrapper[4689]: I1201 09:48:33.864075 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-pssbg_d4a1d78c-9486-4b3b-afac-2d51d2cb14df/kube-rbac-proxy/0.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.006455 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-pssbg_d4a1d78c-9486-4b3b-afac-2d51d2cb14df/manager/1.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.043582 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-pssbg_d4a1d78c-9486-4b3b-afac-2d51d2cb14df/manager/0.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.097906 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-vfnzm_12885cbd-1d3e-40c1-b7f5-73bdb6572db9/manager/1.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.112756 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-vfnzm_12885cbd-1d3e-40c1-b7f5-73bdb6572db9/kube-rbac-proxy/0.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.268420 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-vfnzm_12885cbd-1d3e-40c1-b7f5-73bdb6572db9/manager/0.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.303129 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9_6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538/kube-rbac-proxy/0.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.358749 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9_6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538/manager/0.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.387782 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4sbfl9_6d24ac0e-a14a-4644-ba9a-bc0a6bb0c538/manager/1.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.912218 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-c6fb994fd-5lzsb_161f3daa-6403-48b2-8e33-b01d632a2316/operator/0.log" Dec 01 09:48:34 crc kubenswrapper[4689]: I1201 09:48:34.948813 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-jss7v_b5026a2c-ab73-4b77-99d4-79dd6bcdb139/registry-server/0.log" Dec 01 09:48:35 crc kubenswrapper[4689]: I1201 09:48:35.160115 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-p296h_b3049390-311d-46ed-b472-d32a22f2f8d2/kube-rbac-proxy/0.log" Dec 01 09:48:35 crc kubenswrapper[4689]: I1201 09:48:35.228272 4689 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-p296h_b3049390-311d-46ed-b472-d32a22f2f8d2/manager/1.log" Dec 01 09:48:35 crc kubenswrapper[4689]: I1201 09:48:35.408897 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6fc767d767-8r9dw_4f43cf3a-d166-44ba-8d44-9e81b0666e0a/manager/1.log" Dec 01 09:48:35 crc kubenswrapper[4689]: I1201 09:48:35.479112 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-p296h_b3049390-311d-46ed-b472-d32a22f2f8d2/manager/0.log" Dec 01 09:48:35 crc kubenswrapper[4689]: I1201 09:48:35.594974 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-nsnm9_3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a/kube-rbac-proxy/0.log" Dec 01 09:48:35 crc kubenswrapper[4689]: I1201 09:48:35.778017 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-nsnm9_3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a/manager/1.log" Dec 01 09:48:35 crc kubenswrapper[4689]: I1201 09:48:35.785917 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6fc767d767-8r9dw_4f43cf3a-d166-44ba-8d44-9e81b0666e0a/manager/0.log" Dec 01 09:48:35 crc kubenswrapper[4689]: I1201 09:48:35.834806 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-nsnm9_3e8aa0dc-ea41-48e6-b047-4bb71fd01f8a/manager/0.log" Dec 01 09:48:35 crc kubenswrapper[4689]: I1201 09:48:35.903335 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-t56mz_7085b604-e50c-4940-ac21-b6fe208c82cd/operator/1.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.071467 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-t56mz_7085b604-e50c-4940-ac21-b6fe208c82cd/operator/0.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.149143 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-5d8x5_8b33263b-a51c-49e4-b301-b975791e098a/kube-rbac-proxy/0.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.202320 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-5d8x5_8b33263b-a51c-49e4-b301-b975791e098a/manager/1.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.315036 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-5d8x5_8b33263b-a51c-49e4-b301-b975791e098a/manager/0.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.398875 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-prvxn_af92d0ca-8211-49a0-9362-bd5749143fff/kube-rbac-proxy/0.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.419620 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-prvxn_af92d0ca-8211-49a0-9362-bd5749143fff/manager/1.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.494941 4689 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-prvxn_af92d0ca-8211-49a0-9362-bd5749143fff/manager/0.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.645262 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vbkrn_f94d79da-740a-4080-81d0-ff3bf1867b3d/kube-rbac-proxy/0.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.646456 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vbkrn_f94d79da-740a-4080-81d0-ff3bf1867b3d/manager/1.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.704816 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vbkrn_f94d79da-740a-4080-81d0-ff3bf1867b3d/manager/0.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.829944 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-sfplx_5f9861d6-2700-4af6-b385-e79220c14b2e/kube-rbac-proxy/0.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.900421 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-sfplx_5f9861d6-2700-4af6-b385-e79220c14b2e/manager/0.log" Dec 01 09:48:36 crc kubenswrapper[4689]: I1201 09:48:36.922254 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-sfplx_5f9861d6-2700-4af6-b385-e79220c14b2e/manager/1.log" Dec 01 09:48:39 crc kubenswrapper[4689]: I1201 09:48:39.147309 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:48:39 crc kubenswrapper[4689]: I1201 09:48:39.147629 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:49:01 crc kubenswrapper[4689]: I1201 09:49:01.451462 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xjxwg_2499ecbd-1cda-49a9-8c8a-e80d44127f01/control-plane-machine-set-operator/0.log" Dec 01 09:49:01 crc kubenswrapper[4689]: I1201 09:49:01.618103 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-chlnk_c062b92b-1709-4892-9b40-b1d2405d5812/kube-rbac-proxy/0.log" Dec 01 09:49:01 crc kubenswrapper[4689]: I1201 09:49:01.712312 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-chlnk_c062b92b-1709-4892-9b40-b1d2405d5812/machine-api-operator/0.log" Dec 01 09:49:09 crc kubenswrapper[4689]: I1201 09:49:09.147240 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 
01 09:49:09 crc kubenswrapper[4689]: I1201 09:49:09.147835 4689 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:49:19 crc kubenswrapper[4689]: I1201 09:49:19.132867 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-jxq2j_159eaec1-709b-4f6b-9c2d-271433805055/cert-manager-controller/0.log" Dec 01 09:49:19 crc kubenswrapper[4689]: I1201 09:49:19.197471 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-jxq2j_159eaec1-709b-4f6b-9c2d-271433805055/cert-manager-controller/1.log" Dec 01 09:49:19 crc kubenswrapper[4689]: I1201 09:49:19.333067 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-lhzz2_f166eac0-2073-4aa8-9b0b-6b3c6e43b19e/cert-manager-cainjector/1.log" Dec 01 09:49:19 crc kubenswrapper[4689]: I1201 09:49:19.416122 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-lhzz2_f166eac0-2073-4aa8-9b0b-6b3c6e43b19e/cert-manager-cainjector/0.log" Dec 01 09:49:19 crc kubenswrapper[4689]: I1201 09:49:19.441404 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-mnqrt_0690c213-4822-49c3-a886-9dd92aa3f957/cert-manager-webhook/0.log" Dec 01 09:49:34 crc kubenswrapper[4689]: I1201 09:49:34.607716 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-rls4h_888875d4-358f-4232-96f5-7fe326118284/nmstate-console-plugin/0.log" Dec 01 09:49:35 crc kubenswrapper[4689]: I1201 09:49:35.123219 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mtr66_569aea60-ecf2-4ccb-b516-93098c33139a/nmstate-handler/0.log" Dec 01 09:49:35 crc kubenswrapper[4689]: I1201 09:49:35.212110 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-nwsvj_c0686309-db1b-42c8-963c-e66bee2b8bb1/kube-rbac-proxy/0.log" Dec 01 09:49:35 crc kubenswrapper[4689]: I1201 09:49:35.236234 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-nwsvj_c0686309-db1b-42c8-963c-e66bee2b8bb1/nmstate-metrics/0.log" Dec 01 09:49:35 crc kubenswrapper[4689]: I1201 09:49:35.327693 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-ldxm6_f38467c3-1d62-49ae-97f5-1fa17dbb514e/nmstate-operator/0.log" Dec 01 09:49:35 crc kubenswrapper[4689]: I1201 09:49:35.434022 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-fbrdp_4eb87e27-d5ce-4aa6-9808-862d7afb9fd1/nmstate-webhook/0.log" Dec 01 09:49:39 crc kubenswrapper[4689]: I1201 09:49:39.146639 4689 patch_prober.go:28] interesting pod/machine-config-daemon-hmdnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 09:49:39 crc kubenswrapper[4689]: I1201 09:49:39.147196 4689 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 09:49:39 crc kubenswrapper[4689]: I1201 09:49:39.147261 4689 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" Dec 01 09:49:39 crc kubenswrapper[4689]: I1201 09:49:39.148039 4689 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f"} pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 09:49:39 crc kubenswrapper[4689]: I1201 09:49:39.148111 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" containerName="machine-config-daemon" containerID="cri-o://e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" gracePeriod=600 Dec 01 09:49:39 crc kubenswrapper[4689]: E1201 09:49:39.316607 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:49:39 crc kubenswrapper[4689]: I1201 09:49:39.909780 4689 generic.go:334] "Generic (PLEG): container finished" podID="3947625d-75bf-4332-a233-1491b2ee9d96" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" exitCode=0 Dec 01 09:49:39 crc kubenswrapper[4689]: I1201 09:49:39.909837 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerDied","Data":"e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f"} Dec 01 09:49:39 crc kubenswrapper[4689]: I1201 09:49:39.909897 4689 scope.go:117] "RemoveContainer" containerID="02184f9ae082d659d01c6605cb27fe963a839349b25bcf51719a6228ec800093" Dec 01 09:49:39 crc kubenswrapper[4689]: I1201 09:49:39.910697 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:49:39 crc kubenswrapper[4689]: E1201 09:49:39.910976 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:49:53 crc kubenswrapper[4689]: I1201 09:49:53.930129 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wrs47_ef20aee9-f534-4832-9bb0-ef4ec0c3c807/kube-rbac-proxy/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.011642 4689 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wrs47_ef20aee9-f534-4832-9bb0-ef4ec0c3c807/controller/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.047903 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:49:54 crc kubenswrapper[4689]: E1201 09:49:54.048148 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.237109 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-frr-files/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.418109 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-frr-files/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.418591 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-metrics/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.426741 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-reloader/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.553439 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-reloader/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.866155 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-reloader/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.904216 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-metrics/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.912292 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-metrics/0.log" Dec 01 09:49:54 crc kubenswrapper[4689]: I1201 09:49:54.918668 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-frr-files/0.log" Dec 01 09:49:55 crc kubenswrapper[4689]: I1201 09:49:55.163836 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-frr-files/0.log" Dec 01 09:49:55 crc kubenswrapper[4689]: I1201 09:49:55.208150 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/controller/0.log" Dec 01 09:49:55 crc kubenswrapper[4689]: I1201 09:49:55.254552 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-reloader/0.log" Dec 01 09:49:55 crc kubenswrapper[4689]: I1201 09:49:55.254726 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/cp-metrics/0.log" Dec 01 09:49:55 crc kubenswrapper[4689]: I1201 09:49:55.467550 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/kube-rbac-proxy/0.log" Dec 01 09:49:55 crc kubenswrapper[4689]: I1201 09:49:55.474589 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/frr-metrics/0.log" Dec 01 09:49:55 crc kubenswrapper[4689]: I1201 09:49:55.484861 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/kube-rbac-proxy-frr/0.log" Dec 01 09:49:55 crc kubenswrapper[4689]: I1201 09:49:55.814259 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/reloader/0.log" Dec 01 09:49:55 crc kubenswrapper[4689]: I1201 09:49:55.836558 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-nmlzc_b3b8a95d-6924-4416-a625-995ed59e230d/frr-k8s-webhook-server/0.log" Dec 01 09:49:56 crc kubenswrapper[4689]: I1201 09:49:56.291382 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6599c4498-sh7sl_7d09395b-ad54-4b96-af05-ea6ce866de71/manager/1.log" Dec 01 09:49:56 crc kubenswrapper[4689]: I1201 09:49:56.373824 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6599c4498-sh7sl_7d09395b-ad54-4b96-af05-ea6ce866de71/manager/0.log" Dec 01 09:49:56 crc kubenswrapper[4689]: I1201 09:49:56.630683 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fd7fdd679-r8jpf_1f4ef99a-e0b0-42c7-8599-284fdd6c5ae1/webhook-server/0.log" Dec 01 09:49:56 crc kubenswrapper[4689]: I1201 09:49:56.806852 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5j4hf_67f63643-d748-4058-b24c-66ce8a8c3234/frr/0.log" Dec 01 09:49:56 crc kubenswrapper[4689]: I1201 09:49:56.959031 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5c56f_4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1/kube-rbac-proxy/0.log" Dec 01 09:49:57 crc kubenswrapper[4689]: I1201 09:49:57.231692 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5c56f_4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1/speaker/1.log" Dec 01 09:49:57 crc kubenswrapper[4689]: I1201 09:49:57.335290 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5c56f_4e0a2cf0-4edb-4f5e-90d2-e276ddd43fd1/speaker/0.log" Dec 01 09:50:08 crc kubenswrapper[4689]: I1201 09:50:08.047522 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:50:08 crc kubenswrapper[4689]: E1201 09:50:08.048156 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:50:13 crc kubenswrapper[4689]: I1201 09:50:13.465204 4689 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/util/0.log" Dec 01 09:50:13 crc kubenswrapper[4689]: I1201 09:50:13.691568 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/util/0.log" Dec 01 09:50:13 crc kubenswrapper[4689]: I1201 09:50:13.697194 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/pull/0.log" Dec 01 09:50:13 crc kubenswrapper[4689]: I1201 09:50:13.757507 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/pull/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.061322 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/pull/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.090909 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/extract/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.096159 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f85wqf_4205e462-7e96-4991-8157-5a483dec2452/util/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.252607 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/util/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.476664 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/pull/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.528638 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/util/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.550845 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/pull/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.769708 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/util/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.919446 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/pull/0.log" Dec 01 09:50:14 crc kubenswrapper[4689]: I1201 09:50:14.947226 4689 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83rntb4_76b524e4-b427-4426-86ee-aa0b67f86533/extract/0.log" Dec 01 09:50:15 crc kubenswrapper[4689]: I1201 09:50:15.385417 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-utilities/0.log" Dec 01 09:50:15 crc kubenswrapper[4689]: I1201 09:50:15.644332 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-content/0.log" Dec 01 09:50:15 crc kubenswrapper[4689]: I1201 09:50:15.654401 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-utilities/0.log" Dec 01 09:50:15 crc kubenswrapper[4689]: I1201 09:50:15.732750 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-content/0.log" Dec 01 09:50:15 crc kubenswrapper[4689]: I1201 09:50:15.868479 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-utilities/0.log" Dec 01 09:50:15 crc kubenswrapper[4689]: I1201 09:50:15.925047 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/extract-content/0.log" Dec 01 09:50:16 crc kubenswrapper[4689]: I1201 09:50:16.173257 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-utilities/0.log" Dec 01 09:50:16 crc kubenswrapper[4689]: I1201 09:50:16.547077 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5nsm4_3cea5449-8a30-47d4-bb0f-e7a6c785bee5/registry-server/0.log" Dec 01 09:50:16 crc kubenswrapper[4689]: I1201 09:50:16.554613 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-content/0.log" Dec 01 09:50:16 crc kubenswrapper[4689]: I1201 09:50:16.598780 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-content/0.log" Dec 01 09:50:16 crc kubenswrapper[4689]: I1201 09:50:16.598925 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-utilities/0.log" Dec 01 09:50:16 crc kubenswrapper[4689]: I1201 09:50:16.837259 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-content/0.log" Dec 01 09:50:16 crc kubenswrapper[4689]: I1201 09:50:16.859659 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/extract-utilities/0.log" Dec 01 09:50:17 crc kubenswrapper[4689]: I1201 09:50:17.110657 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqnhc_493e02e9-20cb-4ef2-b0d7-94896afe320d/registry-server/0.log" Dec 01 09:50:17 crc kubenswrapper[4689]: I1201 09:50:17.125753 4689 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jhh4c_0cd9ccf0-2f85-4649-ac80-931f337566ca/marketplace-operator/1.log" Dec 01 09:50:17 crc kubenswrapper[4689]: I1201 09:50:17.181393 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jhh4c_0cd9ccf0-2f85-4649-ac80-931f337566ca/marketplace-operator/0.log" Dec 01 09:50:17 crc kubenswrapper[4689]: I1201 09:50:17.713650 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-utilities/0.log" Dec 01 09:50:17 crc kubenswrapper[4689]: I1201 09:50:17.911267 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-content/0.log" Dec 01 09:50:17 crc kubenswrapper[4689]: I1201 09:50:17.924420 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-utilities/0.log" Dec 01 09:50:17 crc kubenswrapper[4689]: I1201 09:50:17.996125 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-content/0.log" Dec 01 09:50:18 crc kubenswrapper[4689]: I1201 09:50:18.125966 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-content/0.log" Dec 01 09:50:18 crc kubenswrapper[4689]: I1201 09:50:18.223824 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/extract-utilities/0.log" Dec 01 09:50:18 crc kubenswrapper[4689]: I1201 09:50:18.285497 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kww7g_075f35f7-3a97-4145-b911-9a14de1e1fee/registry-server/0.log" Dec 01 09:50:18 crc kubenswrapper[4689]: I1201 09:50:18.338133 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-utilities/0.log" Dec 01 09:50:18 crc kubenswrapper[4689]: I1201 09:50:18.539514 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-utilities/0.log" Dec 01 09:50:18 crc kubenswrapper[4689]: I1201 09:50:18.540834 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-content/0.log" Dec 01 09:50:18 crc kubenswrapper[4689]: I1201 09:50:18.545534 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-content/0.log" Dec 01 09:50:18 crc kubenswrapper[4689]: I1201 09:50:18.739139 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-utilities/0.log" Dec 01 09:50:18 crc kubenswrapper[4689]: I1201 09:50:18.745135 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/extract-content/0.log" Dec 01 09:50:19 crc kubenswrapper[4689]: I1201 09:50:19.391577 4689 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vf7n_11f527ec-49a1-4be9-a67b-676eb6b8feba/registry-server/0.log" Dec 01 09:50:23 crc kubenswrapper[4689]: I1201 09:50:23.047079 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:50:23 crc kubenswrapper[4689]: E1201 09:50:23.047980 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:50:35 crc kubenswrapper[4689]: I1201 09:50:35.047212 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:50:35 crc kubenswrapper[4689]: E1201 09:50:35.048149 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:50:46 crc kubenswrapper[4689]: I1201 09:50:46.047187 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:50:46 crc kubenswrapper[4689]: E1201 09:50:46.047925 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:50:53 crc kubenswrapper[4689]: E1201 09:50:53.825711 4689 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.190:59932->38.102.83.190:35327: write tcp 38.102.83.190:59932->38.102.83.190:35327: write: broken pipe Dec 01 09:50:58 crc kubenswrapper[4689]: I1201 09:50:58.047198 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:50:58 crc kubenswrapper[4689]: E1201 09:50:58.049223 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:51:12 crc kubenswrapper[4689]: I1201 09:51:12.047217 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:51:12 crc kubenswrapper[4689]: E1201 09:51:12.050608 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:51:27 crc kubenswrapper[4689]: I1201 09:51:27.048116 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:51:27 crc kubenswrapper[4689]: E1201 09:51:27.049236 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:51:41 crc kubenswrapper[4689]: I1201 09:51:41.055714 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:51:41 crc kubenswrapper[4689]: E1201 09:51:41.056534 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:51:55 crc kubenswrapper[4689]: I1201 09:51:55.047909 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:51:55 crc kubenswrapper[4689]: E1201 09:51:55.049039 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:52:06 crc kubenswrapper[4689]: I1201 09:52:06.047345 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:52:06 crc kubenswrapper[4689]: E1201 09:52:06.048266 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:52:16 crc kubenswrapper[4689]: I1201 09:52:16.561591 4689 generic.go:334] "Generic (PLEG): container finished" podID="6e26c226-e328-4af6-aeff-3579e26ad21e" containerID="86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf" exitCode=0 Dec 01 09:52:16 crc kubenswrapper[4689]: I1201 09:52:16.561679 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-trlc6/must-gather-j9kx7" event={"ID":"6e26c226-e328-4af6-aeff-3579e26ad21e","Type":"ContainerDied","Data":"86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf"} Dec 01 09:52:16 crc kubenswrapper[4689]: I1201 09:52:16.562914 4689 
scope.go:117] "RemoveContainer" containerID="86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf" Dec 01 09:52:17 crc kubenswrapper[4689]: I1201 09:52:17.400359 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-trlc6_must-gather-j9kx7_6e26c226-e328-4af6-aeff-3579e26ad21e/gather/0.log" Dec 01 09:52:20 crc kubenswrapper[4689]: I1201 09:52:20.049486 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:52:20 crc kubenswrapper[4689]: E1201 09:52:20.050030 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.023509 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-trlc6/must-gather-j9kx7"] Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.024442 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-trlc6/must-gather-j9kx7" podUID="6e26c226-e328-4af6-aeff-3579e26ad21e" containerName="copy" containerID="cri-o://f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9" gracePeriod=2 Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.037241 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-trlc6/must-gather-j9kx7"] Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.611110 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-trlc6_must-gather-j9kx7_6e26c226-e328-4af6-aeff-3579e26ad21e/copy/0.log" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.611763 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.712873 4689 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-trlc6_must-gather-j9kx7_6e26c226-e328-4af6-aeff-3579e26ad21e/copy/0.log" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.713233 4689 generic.go:334] "Generic (PLEG): container finished" podID="6e26c226-e328-4af6-aeff-3579e26ad21e" containerID="f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9" exitCode=143 Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.713289 4689 scope.go:117] "RemoveContainer" containerID="f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.713481 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-trlc6/must-gather-j9kx7" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.744408 4689 scope.go:117] "RemoveContainer" containerID="86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.753910 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8s74\" (UniqueName: \"kubernetes.io/projected/6e26c226-e328-4af6-aeff-3579e26ad21e-kube-api-access-g8s74\") pod \"6e26c226-e328-4af6-aeff-3579e26ad21e\" (UID: \"6e26c226-e328-4af6-aeff-3579e26ad21e\") " Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.754213 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e26c226-e328-4af6-aeff-3579e26ad21e-must-gather-output\") pod \"6e26c226-e328-4af6-aeff-3579e26ad21e\" (UID: \"6e26c226-e328-4af6-aeff-3579e26ad21e\") " Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.782048 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e26c226-e328-4af6-aeff-3579e26ad21e-kube-api-access-g8s74" (OuterVolumeSpecName: "kube-api-access-g8s74") pod "6e26c226-e328-4af6-aeff-3579e26ad21e" (UID: "6e26c226-e328-4af6-aeff-3579e26ad21e"). InnerVolumeSpecName "kube-api-access-g8s74". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.837577 4689 scope.go:117] "RemoveContainer" containerID="f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9" Dec 01 09:52:29 crc kubenswrapper[4689]: E1201 09:52:29.844759 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9\": container with ID starting with f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9 not found: ID does not exist" containerID="f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.845553 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9"} err="failed to get container status \"f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9\": rpc error: code = NotFound desc = could not find container \"f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9\": container with ID starting with f0bbbb998151076f4b15165f27db0f99d52878318a98c2ccf14a2829e006d8b9 not found: ID does not exist" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.845590 4689 scope.go:117] "RemoveContainer" containerID="86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf" Dec 01 09:52:29 crc kubenswrapper[4689]: E1201 09:52:29.846760 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf\": container with ID starting with 86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf not found: ID does not exist" containerID="86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.846785 4689 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf"} err="failed to get container status \"86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf\": rpc error: code = NotFound desc = could not find container \"86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf\": container with ID starting with 86a0e617e0f7fe255f8cdc6f77901360e7c539b557021a337c20c59494761bcf not found: ID does not exist" Dec 01 09:52:29 crc kubenswrapper[4689]: I1201 09:52:29.856161 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8s74\" (UniqueName: \"kubernetes.io/projected/6e26c226-e328-4af6-aeff-3579e26ad21e-kube-api-access-g8s74\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:30 crc kubenswrapper[4689]: I1201 09:52:30.014462 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e26c226-e328-4af6-aeff-3579e26ad21e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6e26c226-e328-4af6-aeff-3579e26ad21e" (UID: "6e26c226-e328-4af6-aeff-3579e26ad21e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:52:30 crc kubenswrapper[4689]: I1201 09:52:30.059730 4689 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e26c226-e328-4af6-aeff-3579e26ad21e-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 01 09:52:31 crc kubenswrapper[4689]: I1201 09:52:31.054805 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:52:31 crc kubenswrapper[4689]: E1201 09:52:31.055087 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:52:31 crc kubenswrapper[4689]: I1201 09:52:31.060005 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e26c226-e328-4af6-aeff-3579e26ad21e" path="/var/lib/kubelet/pods/6e26c226-e328-4af6-aeff-3579e26ad21e/volumes" Dec 01 09:52:42 crc kubenswrapper[4689]: I1201 09:52:42.047629 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:52:42 crc kubenswrapper[4689]: E1201 09:52:42.048313 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:52:56 crc kubenswrapper[4689]: I1201 09:52:56.047706 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:52:56 crc kubenswrapper[4689]: E1201 09:52:56.049134 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:53:08 crc kubenswrapper[4689]: I1201 09:53:08.047402 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:53:08 crc kubenswrapper[4689]: E1201 09:53:08.049643 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:53:22 crc kubenswrapper[4689]: I1201 09:53:22.047901 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:53:22 crc kubenswrapper[4689]: E1201 09:53:22.048581 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:53:34 crc kubenswrapper[4689]: I1201 09:53:34.048201 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:53:34 crc kubenswrapper[4689]: E1201 09:53:34.049071 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.130497 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-949rp"] Dec 01 09:53:47 crc kubenswrapper[4689]: E1201 09:53:47.133108 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e26c226-e328-4af6-aeff-3579e26ad21e" containerName="gather" Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.133221 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e26c226-e328-4af6-aeff-3579e26ad21e" containerName="gather" Dec 01 09:53:47 crc kubenswrapper[4689]: E1201 09:53:47.133318 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0608b7a8-eb02-4fcd-9a47-6f87e3345339" containerName="container-00" Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.133402 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="0608b7a8-eb02-4fcd-9a47-6f87e3345339" containerName="container-00" Dec 01 09:53:47 crc kubenswrapper[4689]: E1201 09:53:47.133476 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e26c226-e328-4af6-aeff-3579e26ad21e" containerName="copy" Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.133537 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e26c226-e328-4af6-aeff-3579e26ad21e" containerName="copy" 
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.133789 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e26c226-e328-4af6-aeff-3579e26ad21e" containerName="gather"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.133866 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="0608b7a8-eb02-4fcd-9a47-6f87e3345339" containerName="container-00"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.133949 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e26c226-e328-4af6-aeff-3579e26ad21e" containerName="copy"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.135974 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.148260 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-949rp"]
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.219671 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87dg\" (UniqueName: \"kubernetes.io/projected/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-kube-api-access-z87dg\") pod \"community-operators-949rp\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") " pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.220040 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-utilities\") pod \"community-operators-949rp\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") " pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.220059 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-catalog-content\") pod \"community-operators-949rp\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") " pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.321218 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87dg\" (UniqueName: \"kubernetes.io/projected/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-kube-api-access-z87dg\") pod \"community-operators-949rp\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") " pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.321288 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-utilities\") pod \"community-operators-949rp\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") " pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.321304 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-catalog-content\") pod \"community-operators-949rp\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") " pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.321794 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-catalog-content\") pod \"community-operators-949rp\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") " pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.321944 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-utilities\") pod \"community-operators-949rp\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") " pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.346505 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87dg\" (UniqueName: \"kubernetes.io/projected/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-kube-api-access-z87dg\") pod \"community-operators-949rp\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") " pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:47 crc kubenswrapper[4689]: I1201 09:53:47.473845 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:48 crc kubenswrapper[4689]: I1201 09:53:48.729777 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-949rp"]
Dec 01 09:53:49 crc kubenswrapper[4689]: I1201 09:53:49.056513 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f"
Dec 01 09:53:49 crc kubenswrapper[4689]: E1201 09:53:49.056774 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
Dec 01 09:53:49 crc kubenswrapper[4689]: I1201 09:53:49.479169 4689 generic.go:334] "Generic (PLEG): container finished" podID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerID="fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f" exitCode=0
Dec 01 09:53:49 crc kubenswrapper[4689]: I1201 09:53:49.479215 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-949rp" event={"ID":"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7","Type":"ContainerDied","Data":"fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f"}
Dec 01 09:53:49 crc kubenswrapper[4689]: I1201 09:53:49.479241 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-949rp" event={"ID":"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7","Type":"ContainerStarted","Data":"7935e8f54bfe662193acf3b2679a26718168a3b2210a42a8701140001ba53e9c"}
Dec 01 09:53:49 crc kubenswrapper[4689]: I1201 09:53:49.482570 4689 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 01 09:53:51 crc kubenswrapper[4689]: I1201 09:53:51.502221 4689 generic.go:334] "Generic (PLEG): container finished" podID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerID="552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4" exitCode=0
Dec 01 09:53:51 crc kubenswrapper[4689]: I1201 09:53:51.502458 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-949rp" event={"ID":"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7","Type":"ContainerDied","Data":"552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4"}
Dec 01 09:53:51 crc kubenswrapper[4689]: I1201 09:53:51.944291 4689 scope.go:117] "RemoveContainer" containerID="b6bac490a10e9c30902b818df2b1d86f46429429abec37d496e25d474c91baf5"
Dec 01 09:53:53 crc kubenswrapper[4689]: I1201 09:53:53.524876 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-949rp" event={"ID":"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7","Type":"ContainerStarted","Data":"97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4"}
Dec 01 09:53:53 crc kubenswrapper[4689]: I1201 09:53:53.550029 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-949rp" podStartSLOduration=3.597026599 podStartE2EDuration="6.550006369s" podCreationTimestamp="2025-12-01 09:53:47 +0000 UTC" firstStartedPulling="2025-12-01 09:53:49.482167716 +0000 UTC m=+4509.554455620" lastFinishedPulling="2025-12-01 09:53:52.435147476 +0000 UTC m=+4512.507435390" observedRunningTime="2025-12-01 09:53:53.541539647 +0000 UTC m=+4513.613827571" watchObservedRunningTime="2025-12-01 09:53:53.550006369 +0000 UTC m=+4513.622294273"
Dec 01 09:53:57 crc kubenswrapper[4689]: I1201 09:53:57.475124 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:57 crc kubenswrapper[4689]: I1201 09:53:57.476089 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:57 crc kubenswrapper[4689]: I1201 09:53:57.523122 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:57 crc kubenswrapper[4689]: I1201 09:53:57.609817 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:53:57 crc kubenswrapper[4689]: I1201 09:53:57.758764 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-949rp"]
Dec 01 09:53:59 crc kubenswrapper[4689]: I1201 09:53:59.583196 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-949rp" podUID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerName="registry-server" containerID="cri-o://97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4" gracePeriod=2
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.050145 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.160406 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-utilities\") pod \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") "
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.160502 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-catalog-content\") pod \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") "
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.160729 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z87dg\" (UniqueName: \"kubernetes.io/projected/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-kube-api-access-z87dg\") pod \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\" (UID: \"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7\") "
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.162412 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-utilities" (OuterVolumeSpecName: "utilities") pod "c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" (UID: "c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.174182 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-kube-api-access-z87dg" (OuterVolumeSpecName: "kube-api-access-z87dg") pod "c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" (UID: "c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7"). InnerVolumeSpecName "kube-api-access-z87dg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.219160 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" (UID: "c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.262977 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z87dg\" (UniqueName: \"kubernetes.io/projected/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-kube-api-access-z87dg\") on node \"crc\" DevicePath \"\""
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.263027 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.263038 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.595862 4689 generic.go:334] "Generic (PLEG): container finished" podID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerID="97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4" exitCode=0
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.595925 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-949rp" event={"ID":"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7","Type":"ContainerDied","Data":"97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4"}
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.595963 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-949rp"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.595996 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-949rp" event={"ID":"c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7","Type":"ContainerDied","Data":"7935e8f54bfe662193acf3b2679a26718168a3b2210a42a8701140001ba53e9c"}
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.596018 4689 scope.go:117] "RemoveContainer" containerID="97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.616605 4689 scope.go:117] "RemoveContainer" containerID="552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.654834 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-949rp"]
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.664986 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-949rp"]
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.842288 4689 scope.go:117] "RemoveContainer" containerID="fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.895693 4689 scope.go:117] "RemoveContainer" containerID="97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4"
Dec 01 09:54:00 crc kubenswrapper[4689]: E1201 09:54:00.896055 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4\": container with ID starting with 97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4 not found: ID does not exist" containerID="97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.896104 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4"} err="failed to get container status \"97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4\": rpc error: code = NotFound desc = could not find container \"97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4\": container with ID starting with 97a2ecb6ca4d410ec91c1ec62c2df9aa52392110c2d90aedd5525403675f99b4 not found: ID does not exist"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.896133 4689 scope.go:117] "RemoveContainer" containerID="552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4"
Dec 01 09:54:00 crc kubenswrapper[4689]: E1201 09:54:00.896404 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4\": container with ID starting with 552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4 not found: ID does not exist" containerID="552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.896430 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4"} err="failed to get container status \"552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4\": rpc error: code = NotFound desc = could not find container \"552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4\": container with ID starting with 552bfb830fb6247b090b28411d30b0b672a4a8d94e911e95b3e745ebe743ffb4 not found: ID does not exist"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.896447 4689 scope.go:117] "RemoveContainer" containerID="fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f"
Dec 01 09:54:00 crc kubenswrapper[4689]: E1201 09:54:00.896677 4689 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f\": container with ID starting with fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f not found: ID does not exist" containerID="fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f"
Dec 01 09:54:00 crc kubenswrapper[4689]: I1201 09:54:00.896702 4689 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f"} err="failed to get container status \"fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f\": rpc error: code = NotFound desc = could not find container \"fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f\": container with ID starting with fa3c48bcc22fcae5961b36c3298c26221145d1531bdd4e492e3435dbdcf9e30f not found: ID does not exist"
Dec 01 09:54:01 crc kubenswrapper[4689]: I1201 09:54:01.061995 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" path="/var/lib/kubelet/pods/c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7/volumes"
Dec 01 09:54:03 crc kubenswrapper[4689]: I1201 09:54:03.047337 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f"
Dec 01 09:54:03 crc kubenswrapper[4689]: E1201 09:54:03.047906 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96"
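The NotFound errors above are benign: the same container IDs were removed a few entries earlier, and a second RemoveContainer for an ID the runtime no longer knows simply finds nothing to do. The usual consumer-side pattern is to treat NotFound from the runtime as success. A sketch of that idempotent-delete pattern; `removeContainer` and the injected `remove` callback are hypothetical stand-ins for a CRI client call, not kubelet code:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Treat NotFound from the runtime as "already deleted": removal is
// idempotent, so a second delete of the same container ID is a no-op.
func removeContainer(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return err // a real failure; NotFound just means the work is done
	}
	return nil
}

func main() {
	// Simulate a runtime that has already forgotten the container.
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeContainer(gone, "97a2ecb6...")) // <nil>: already gone
}
```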
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:54:18 crc kubenswrapper[4689]: I1201 09:54:18.048531 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:54:18 crc kubenswrapper[4689]: E1201 09:54:18.049336 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:54:31 crc kubenswrapper[4689]: I1201 09:54:31.054465 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:54:31 crc kubenswrapper[4689]: E1201 09:54:31.055276 4689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hmdnx_openshift-machine-config-operator(3947625d-75bf-4332-a233-1491b2ee9d96)\"" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" podUID="3947625d-75bf-4332-a233-1491b2ee9d96" Dec 01 09:54:46 crc kubenswrapper[4689]: I1201 09:54:46.048117 4689 scope.go:117] "RemoveContainer" containerID="e36fd29c354928ae5e5e41432d56ac7d589bb3fcd68c98fda545a3d82585498f" Dec 01 09:54:47 crc kubenswrapper[4689]: I1201 09:54:47.074482 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hmdnx" event={"ID":"3947625d-75bf-4332-a233-1491b2ee9d96","Type":"ContainerStarted","Data":"6a3570d3f4cadfd74489944ddbff028fa661d5628bb5c34568a63d2e93d96733"} Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.488521 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jxdtw"] Dec 01 09:54:52 crc kubenswrapper[4689]: E1201 09:54:52.489945 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerName="extract-utilities" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.489963 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerName="extract-utilities" Dec 01 09:54:52 crc kubenswrapper[4689]: E1201 09:54:52.489972 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerName="extract-content" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.489979 4689 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerName="extract-content" Dec 01 09:54:52 crc kubenswrapper[4689]: E1201 09:54:52.489995 4689 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerName="registry-server" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.490019 4689 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerName="registry-server" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.490308 4689 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b7ae15-5736-4d52-a3c9-dea5cbd8c7d7" containerName="registry-server" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.492955 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.512612 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jxdtw"] Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.611073 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-catalog-content\") pod \"certified-operators-jxdtw\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") " pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.611193 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-utilities\") pod \"certified-operators-jxdtw\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") " pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.611256 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqjm6\" (UniqueName: \"kubernetes.io/projected/982e3533-2f09-4237-b916-fddc6e7bc494-kube-api-access-kqjm6\") pod \"certified-operators-jxdtw\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") " pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.713604 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-catalog-content\") pod \"certified-operators-jxdtw\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") " pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.713750 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-utilities\") pod \"certified-operators-jxdtw\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") " pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.713831 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqjm6\" (UniqueName: \"kubernetes.io/projected/982e3533-2f09-4237-b916-fddc6e7bc494-kube-api-access-kqjm6\") pod \"certified-operators-jxdtw\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") " pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.714234 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-catalog-content\") pod \"certified-operators-jxdtw\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") " pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.714264 4689 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-utilities\") pod \"certified-operators-jxdtw\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") " pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.739525 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqjm6\" (UniqueName: \"kubernetes.io/projected/982e3533-2f09-4237-b916-fddc6e7bc494-kube-api-access-kqjm6\") pod \"certified-operators-jxdtw\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") " pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:52 crc kubenswrapper[4689]: I1201 09:54:52.828298 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:54:53 crc kubenswrapper[4689]: I1201 09:54:53.402324 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jxdtw"] Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.153021 4689 generic.go:334] "Generic (PLEG): container finished" podID="982e3533-2f09-4237-b916-fddc6e7bc494" containerID="acdc369fc563bcd1420838327f6f08171a620710d37f7eab9dcd4ee46deed815" exitCode=0 Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.154612 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxdtw" event={"ID":"982e3533-2f09-4237-b916-fddc6e7bc494","Type":"ContainerDied","Data":"acdc369fc563bcd1420838327f6f08171a620710d37f7eab9dcd4ee46deed815"} Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.154771 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxdtw" event={"ID":"982e3533-2f09-4237-b916-fddc6e7bc494","Type":"ContainerStarted","Data":"38674b75414e83237465a65ea326bb560414e425b97b942ad9d4d92316be205f"} Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.293977 4689 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jw7ht"] Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.297340 4689 util.go:30] "No sandbox for pod can be found. 
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.312508 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw7ht"]
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.347023 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-utilities\") pod \"redhat-marketplace-jw7ht\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") " pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.347089 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-catalog-content\") pod \"redhat-marketplace-jw7ht\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") " pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.347294 4689 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nt48\" (UniqueName: \"kubernetes.io/projected/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-kube-api-access-4nt48\") pod \"redhat-marketplace-jw7ht\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") " pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.449982 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-utilities\") pod \"redhat-marketplace-jw7ht\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") " pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.450845 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-catalog-content\") pod \"redhat-marketplace-jw7ht\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") " pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.450628 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-utilities\") pod \"redhat-marketplace-jw7ht\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") " pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.451117 4689 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nt48\" (UniqueName: \"kubernetes.io/projected/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-kube-api-access-4nt48\") pod \"redhat-marketplace-jw7ht\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") " pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.451162 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-catalog-content\") pod \"redhat-marketplace-jw7ht\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") " pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.475733 4689 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nt48\" (UniqueName: \"kubernetes.io/projected/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-kube-api-access-4nt48\") pod \"redhat-marketplace-jw7ht\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") " pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:54 crc kubenswrapper[4689]: I1201 09:54:54.618443 4689 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:54:55 crc kubenswrapper[4689]: I1201 09:54:55.131815 4689 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw7ht"]
Dec 01 09:54:55 crc kubenswrapper[4689]: W1201 09:54:55.329468 4689 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbf3d82_7fd9_4fc2_86ca_f2e10c6cafa5.slice/crio-54e1c2d1a52bf5dd04f5af315042506db5fb0fb1445f443fcb810f238da9a5bf WatchSource:0}: Error finding container 54e1c2d1a52bf5dd04f5af315042506db5fb0fb1445f443fcb810f238da9a5bf: Status 404 returned error can't find the container with id 54e1c2d1a52bf5dd04f5af315042506db5fb0fb1445f443fcb810f238da9a5bf
Dec 01 09:54:55 crc kubenswrapper[4689]: E1201 09:54:55.783799 4689 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbf3d82_7fd9_4fc2_86ca_f2e10c6cafa5.slice/crio-341d982557b775d00beb419d69da612f4b7b1fd5ce4aa90abecb0104146b9c31.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbf3d82_7fd9_4fc2_86ca_f2e10c6cafa5.slice/crio-conmon-341d982557b775d00beb419d69da612f4b7b1fd5ce4aa90abecb0104146b9c31.scope\": RecentStats: unable to find data in memory cache]"
Dec 01 09:54:56 crc kubenswrapper[4689]: I1201 09:54:56.173922 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxdtw" event={"ID":"982e3533-2f09-4237-b916-fddc6e7bc494","Type":"ContainerStarted","Data":"6874f755057b75507b7967455f2dee39afcf2d100f7947880814a4ddfff7d716"}
Dec 01 09:54:56 crc kubenswrapper[4689]: I1201 09:54:56.179858 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5" containerID="341d982557b775d00beb419d69da612f4b7b1fd5ce4aa90abecb0104146b9c31" exitCode=0
Dec 01 09:54:56 crc kubenswrapper[4689]: I1201 09:54:56.179906 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw7ht" event={"ID":"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5","Type":"ContainerDied","Data":"341d982557b775d00beb419d69da612f4b7b1fd5ce4aa90abecb0104146b9c31"}
Dec 01 09:54:56 crc kubenswrapper[4689]: I1201 09:54:56.179951 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw7ht" event={"ID":"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5","Type":"ContainerStarted","Data":"54e1c2d1a52bf5dd04f5af315042506db5fb0fb1445f443fcb810f238da9a5bf"}
Dec 01 09:54:57 crc kubenswrapper[4689]: I1201 09:54:57.191064 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw7ht" event={"ID":"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5","Type":"ContainerStarted","Data":"2d582732f432c70acef15506e7dc1e4be6296c87a1bbbd7faa4b6cdf20bb38f2"}
Dec 01 09:54:57 crc kubenswrapper[4689]: I1201 09:54:57.192890 4689 generic.go:334] "Generic (PLEG): container finished" podID="982e3533-2f09-4237-b916-fddc6e7bc494" containerID="6874f755057b75507b7967455f2dee39afcf2d100f7947880814a4ddfff7d716" exitCode=0
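The manager.go 404 warning and the cadvisor "RecentStats: unable to find data in memory cache" failures above are artifacts of container churn: the catalog pods' init containers start and exit faster than the stats cache is populated, so a scrape can land in the gap between cgroup creation and the first cached sample. Consumers of such stats typically just retry; a sketch of that pattern with a hypothetical fetch callback, not a real cadvisor API:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

// statsWithRetry retries only the transient cache-miss case seen in the
// log; any other error is returned immediately. fetch is a stand-in for
// whatever stats call the caller actually makes.
func statsWithRetry(fetch func() (string, error), attempts int, wait time.Duration) (string, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var s string
		if s, err = fetch(); err == nil {
			return s, nil
		}
		if !strings.Contains(err.Error(), "unable to find data in memory cache") {
			return "", err // not the transient case
		}
		time.Sleep(wait)
	}
	return "", err
}

func main() {
	calls := 0
	fetch := func() (string, error) {
		calls++
		if calls < 2 { // first call lands in the gap, like the log's scrape
			return "", errors.New("RecentStats: unable to find data in memory cache")
		}
		return "cpu=12m mem=48Mi", nil
	}
	fmt.Println(statsWithRetry(fetch, 3, 10*time.Millisecond))
}
```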
podID="982e3533-2f09-4237-b916-fddc6e7bc494" containerID="6874f755057b75507b7967455f2dee39afcf2d100f7947880814a4ddfff7d716" exitCode=0 Dec 01 09:54:57 crc kubenswrapper[4689]: I1201 09:54:57.192924 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxdtw" event={"ID":"982e3533-2f09-4237-b916-fddc6e7bc494","Type":"ContainerDied","Data":"6874f755057b75507b7967455f2dee39afcf2d100f7947880814a4ddfff7d716"} Dec 01 09:54:58 crc kubenswrapper[4689]: I1201 09:54:58.205748 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxdtw" event={"ID":"982e3533-2f09-4237-b916-fddc6e7bc494","Type":"ContainerStarted","Data":"7025a734f2a837a9e71a70c93966417bf4707ce82666d256bd848ffff52f93b5"} Dec 01 09:54:58 crc kubenswrapper[4689]: I1201 09:54:58.207911 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5" containerID="2d582732f432c70acef15506e7dc1e4be6296c87a1bbbd7faa4b6cdf20bb38f2" exitCode=0 Dec 01 09:54:58 crc kubenswrapper[4689]: I1201 09:54:58.207944 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw7ht" event={"ID":"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5","Type":"ContainerDied","Data":"2d582732f432c70acef15506e7dc1e4be6296c87a1bbbd7faa4b6cdf20bb38f2"} Dec 01 09:54:58 crc kubenswrapper[4689]: I1201 09:54:58.273558 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jxdtw" podStartSLOduration=2.751558251 podStartE2EDuration="6.27353728s" podCreationTimestamp="2025-12-01 09:54:52 +0000 UTC" firstStartedPulling="2025-12-01 09:54:54.175460008 +0000 UTC m=+4574.247747912" lastFinishedPulling="2025-12-01 09:54:57.697439037 +0000 UTC m=+4577.769726941" observedRunningTime="2025-12-01 09:54:58.24252955 +0000 UTC m=+4578.314817454" watchObservedRunningTime="2025-12-01 09:54:58.27353728 +0000 UTC m=+4578.345825184" Dec 01 09:54:59 crc kubenswrapper[4689]: I1201 09:54:59.219918 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw7ht" event={"ID":"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5","Type":"ContainerStarted","Data":"b05ec1a18a30b18d07c412d04f70b2442dcf87c6c7e97ca9c7890c651a4dbe45"} Dec 01 09:55:02 crc kubenswrapper[4689]: I1201 09:55:02.829121 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:55:02 crc kubenswrapper[4689]: I1201 09:55:02.831502 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:55:02 crc kubenswrapper[4689]: I1201 09:55:02.889252 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:55:02 crc kubenswrapper[4689]: I1201 09:55:02.915695 4689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jw7ht" podStartSLOduration=6.314113803 podStartE2EDuration="8.915670666s" podCreationTimestamp="2025-12-01 09:54:54 +0000 UTC" firstStartedPulling="2025-12-01 09:54:56.189172486 +0000 UTC m=+4576.261460390" lastFinishedPulling="2025-12-01 09:54:58.790729349 +0000 UTC m=+4578.863017253" observedRunningTime="2025-12-01 09:54:59.253738093 +0000 UTC m=+4579.326026007" watchObservedRunningTime="2025-12-01 09:55:02.915670666 +0000 UTC m=+4582.987958570" Dec 01 09:55:03 crc 
kubenswrapper[4689]: I1201 09:55:03.314587 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jxdtw" Dec 01 09:55:04 crc kubenswrapper[4689]: I1201 09:55:04.094601 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jxdtw"] Dec 01 09:55:04 crc kubenswrapper[4689]: I1201 09:55:04.619417 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jw7ht" Dec 01 09:55:04 crc kubenswrapper[4689]: I1201 09:55:04.619507 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jw7ht" Dec 01 09:55:04 crc kubenswrapper[4689]: I1201 09:55:04.675867 4689 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jw7ht" Dec 01 09:55:05 crc kubenswrapper[4689]: I1201 09:55:05.277077 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jxdtw" podUID="982e3533-2f09-4237-b916-fddc6e7bc494" containerName="registry-server" containerID="cri-o://7025a734f2a837a9e71a70c93966417bf4707ce82666d256bd848ffff52f93b5" gracePeriod=2 Dec 01 09:55:05 crc kubenswrapper[4689]: I1201 09:55:05.336231 4689 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jw7ht" Dec 01 09:55:06 crc kubenswrapper[4689]: I1201 09:55:06.474959 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw7ht"] Dec 01 09:55:07 crc kubenswrapper[4689]: I1201 09:55:07.294960 4689 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jw7ht" podUID="dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5" containerName="registry-server" containerID="cri-o://b05ec1a18a30b18d07c412d04f70b2442dcf87c6c7e97ca9c7890c651a4dbe45" gracePeriod=2 Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.305818 4689 generic.go:334] "Generic (PLEG): container finished" podID="982e3533-2f09-4237-b916-fddc6e7bc494" containerID="7025a734f2a837a9e71a70c93966417bf4707ce82666d256bd848ffff52f93b5" exitCode=0 Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.305832 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxdtw" event={"ID":"982e3533-2f09-4237-b916-fddc6e7bc494","Type":"ContainerDied","Data":"7025a734f2a837a9e71a70c93966417bf4707ce82666d256bd848ffff52f93b5"} Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.798114 4689 util.go:48] "No ready sandbox for pod can be found. 
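The "Killing container with a grace period ... gracePeriod=2" entries above are the standard two-step stop: signal the container with SIGTERM, wait up to the grace period, then force-kill. The kubelet drives this through the CRI; the sketch below shows the same sequence against a local process, which is an illustration, not kubelet code:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace mirrors the graceful-stop sequence in the log:
// SIGTERM, wait up to the grace period for a clean exit, then SIGKILL.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace expired: SIGKILL
		fmt.Println("killed after grace period:", <-done)
	}
}

func main() {
	cmd := exec.Command("sleep", "30") // stands in for registry-server
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second) // gracePeriod=2, as in the log
}
```

The short 2s grace period here fits a registry-server that holds no state worth flushing; the PLEG ContainerDied events that follow are the kubelet observing the resulting exit.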
Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.844131 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqjm6\" (UniqueName: \"kubernetes.io/projected/982e3533-2f09-4237-b916-fddc6e7bc494-kube-api-access-kqjm6\") pod \"982e3533-2f09-4237-b916-fddc6e7bc494\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") "
Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.844251 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-catalog-content\") pod \"982e3533-2f09-4237-b916-fddc6e7bc494\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") "
Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.844299 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-utilities\") pod \"982e3533-2f09-4237-b916-fddc6e7bc494\" (UID: \"982e3533-2f09-4237-b916-fddc6e7bc494\") "
Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.845755 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-utilities" (OuterVolumeSpecName: "utilities") pod "982e3533-2f09-4237-b916-fddc6e7bc494" (UID: "982e3533-2f09-4237-b916-fddc6e7bc494"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.869811 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982e3533-2f09-4237-b916-fddc6e7bc494-kube-api-access-kqjm6" (OuterVolumeSpecName: "kube-api-access-kqjm6") pod "982e3533-2f09-4237-b916-fddc6e7bc494" (UID: "982e3533-2f09-4237-b916-fddc6e7bc494"). InnerVolumeSpecName "kube-api-access-kqjm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.902708 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "982e3533-2f09-4237-b916-fddc6e7bc494" (UID: "982e3533-2f09-4237-b916-fddc6e7bc494"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.946469 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqjm6\" (UniqueName: \"kubernetes.io/projected/982e3533-2f09-4237-b916-fddc6e7bc494-kube-api-access-kqjm6\") on node \"crc\" DevicePath \"\""
Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.946509 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 01 09:55:08 crc kubenswrapper[4689]: I1201 09:55:08.946525 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/982e3533-2f09-4237-b916-fddc6e7bc494-utilities\") on node \"crc\" DevicePath \"\""
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.320147 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jxdtw"
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.320150 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jxdtw" event={"ID":"982e3533-2f09-4237-b916-fddc6e7bc494","Type":"ContainerDied","Data":"38674b75414e83237465a65ea326bb560414e425b97b942ad9d4d92316be205f"}
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.320752 4689 scope.go:117] "RemoveContainer" containerID="7025a734f2a837a9e71a70c93966417bf4707ce82666d256bd848ffff52f93b5"
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.332164 4689 generic.go:334] "Generic (PLEG): container finished" podID="dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5" containerID="b05ec1a18a30b18d07c412d04f70b2442dcf87c6c7e97ca9c7890c651a4dbe45" exitCode=0
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.332222 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw7ht" event={"ID":"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5","Type":"ContainerDied","Data":"b05ec1a18a30b18d07c412d04f70b2442dcf87c6c7e97ca9c7890c651a4dbe45"}
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.370431 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jxdtw"]
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.380334 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jxdtw"]
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.561321 4689 scope.go:117] "RemoveContainer" containerID="6874f755057b75507b7967455f2dee39afcf2d100f7947880814a4ddfff7d716"
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.756146 4689 scope.go:117] "RemoveContainer" containerID="acdc369fc563bcd1420838327f6f08171a620710d37f7eab9dcd4ee46deed815"
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.803897 4689 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jw7ht"
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.966351 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nt48\" (UniqueName: \"kubernetes.io/projected/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-kube-api-access-4nt48\") pod \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") "
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.966561 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-catalog-content\") pod \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") "
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.974619 4689 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-utilities\") pod \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\" (UID: \"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5\") "
Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.975589 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-utilities" (OuterVolumeSpecName: "utilities") pod "dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5" (UID: "dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.975901 4689 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.979703 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-kube-api-access-4nt48" (OuterVolumeSpecName: "kube-api-access-4nt48") pod "dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5" (UID: "dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5"). InnerVolumeSpecName "kube-api-access-4nt48". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 09:55:09 crc kubenswrapper[4689]: I1201 09:55:09.987542 4689 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5" (UID: "dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 09:55:10 crc kubenswrapper[4689]: I1201 09:55:10.078841 4689 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nt48\" (UniqueName: \"kubernetes.io/projected/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-kube-api-access-4nt48\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:10 crc kubenswrapper[4689]: I1201 09:55:10.079226 4689 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 09:55:10 crc kubenswrapper[4689]: I1201 09:55:10.346090 4689 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jw7ht" event={"ID":"dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5","Type":"ContainerDied","Data":"54e1c2d1a52bf5dd04f5af315042506db5fb0fb1445f443fcb810f238da9a5bf"} Dec 01 09:55:10 crc kubenswrapper[4689]: I1201 09:55:10.346151 4689 scope.go:117] "RemoveContainer" containerID="b05ec1a18a30b18d07c412d04f70b2442dcf87c6c7e97ca9c7890c651a4dbe45" Dec 01 09:55:10 crc kubenswrapper[4689]: I1201 09:55:10.346178 4689 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jw7ht" Dec 01 09:55:10 crc kubenswrapper[4689]: I1201 09:55:10.377361 4689 scope.go:117] "RemoveContainer" containerID="2d582732f432c70acef15506e7dc1e4be6296c87a1bbbd7faa4b6cdf20bb38f2" Dec 01 09:55:10 crc kubenswrapper[4689]: I1201 09:55:10.382499 4689 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw7ht"] Dec 01 09:55:10 crc kubenswrapper[4689]: I1201 09:55:10.390016 4689 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jw7ht"] Dec 01 09:55:10 crc kubenswrapper[4689]: I1201 09:55:10.452562 4689 scope.go:117] "RemoveContainer" containerID="341d982557b775d00beb419d69da612f4b7b1fd5ce4aa90abecb0104146b9c31" Dec 01 09:55:11 crc kubenswrapper[4689]: I1201 09:55:11.076152 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982e3533-2f09-4237-b916-fddc6e7bc494" path="/var/lib/kubelet/pods/982e3533-2f09-4237-b916-fddc6e7bc494/volumes" Dec 01 09:55:11 crc kubenswrapper[4689]: I1201 09:55:11.077501 4689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5" path="/var/lib/kubelet/pods/dcbf3d82-7fd9-4fc2-86ca-f2e10c6cafa5/volumes"
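The closing "Cleaned up orphaned pod volumes dir" entries are the last step of pod removal: once every volume has been unmounted and detached, the kubelet deletes the pod's now-empty volumes directory under /var/lib/kubelet/pods/<uid>. A sketch of that final check-and-remove; the "remove only if empty" rule and the helper name are illustrative, not the kubelet's exact logic:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanOrphanedVolumesDir removes <podsRoot>/<podUID>/volumes, but only
// if nothing is left inside: a non-empty directory would mean a volume
// is still mounted and deletion must wait for another reconcile pass.
func cleanOrphanedVolumesDir(podsRoot, podUID string) error {
	dir := filepath.Join(podsRoot, podUID, "volumes")
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	if len(entries) != 0 {
		return fmt.Errorf("%s not empty; volumes may still be mounted", dir)
	}
	return os.Remove(dir)
}

func main() {
	// Exercise the helper against a throwaway directory tree.
	root, _ := os.MkdirTemp("", "pods")
	defer os.RemoveAll(root)
	uid := "982e3533-2f09-4237-b916-fddc6e7bc494"
	_ = os.MkdirAll(filepath.Join(root, uid, "volumes"), 0o755)
	fmt.Println(cleanOrphanedVolumesDir(root, uid)) // <nil>: dir was empty
}
```

With both catalog pods' directories gone, the node is back to steady state: the only remaining actor in this window was the machine-config-daemon, whose container restarted successfully at 09:54:47 after its backoff expired.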